Fundamentals of Artificial Neural Networks

by Mohamad H. Hassoun
(MIT Press, 1995)

http://neuron.eng.wayne.edu/

Preface

My purpose in writing this book has been to give a systematic account of major
concepts and methodologies of artificial neural networks and to present a unified
framework that makes the subject more accessible to students and practitioners. This
book emphasizes fundamental theoretical aspects of the computational capabilities
and learning abilities of artificial neural networks. It integrates important theoretical
results on artificial neural networks and uses them to explain a wide range of existing
empirical observations and commonly used heuristics.

The main audience is first-year graduate students in electrical engineering, computer engineering, and computer science. This book may be adapted for use as a senior
undergraduate textbook by selective choice of topics. Alternatively, it may also be
used as a valuable resource for practicing engineers, computer scientists, and others
involved in research in artificial neural networks.

This book has evolved from lecture notes of two courses on artificial neural networks,
a senior-level course and a graduate-level course, which I have taught during the last
6 years in the Department of Electrical and Computer Engineering at Wayne State
University.

The background material needed to understand this book is general knowledge of some basic topics in mathematics, such as probability and statistics, differential
equations and linear algebra, and something about multivariate calculus. The reader is
also assumed to have enough familiarity with the concept of a system and the notion
of "state," as well as with the basic elements of Boolean algebra and switching theory.
The required technical maturity is that of a senior undergraduate in electrical
engineering, computer engineering, or computer science.

Artificial neural networks are viewed here as parallel computational models, with
varying degrees of complexity, comprised of densely interconnected adaptive
processing units. These networks are fine-grained parallel implementations of
nonlinear static or dynamic systems. A very important feature of these networks is
their adaptive nature, where "learning by example" replaces traditional
"programming" in solving problems. This feature makes such computational models
very appealing in application domains where one has little or incomplete
understanding of the problem to be solved but where training data is readily available.
Another key feature is the intrinsic parallelism that allows for fast computations of
solutions when these networks are implemented on parallel digital computers or,
ultimately, when implemented in customized hardware.

Artificial neural networks are viable computational models for a wide variety of
problems, including pattern classification, speech synthesis and recognition, adaptive
interfaces between humans and complex physical systems, function approximation,
image data compression, associative memory, clustering, forecasting and prediction,
combinatorial optimization, nonlinear system modeling, and control. These networks
are "neural" in the sense that they may have been inspired by neuroscience, but not
because they are faithful models of biologic neural or cognitive phenomena. In fact,
the majority of the network models covered in this book are more closely related to
traditional mathematical and/or statistical models such as optimization algorithms,
nonparametric pattern classifiers, clustering algorithms, linear and nonlinear filters,
and statistical regression models than they are to neurobiologic models.

The theories and techniques of artificial neural networks outlined here are fairly
mathematical, although the level of mathematical rigor is relatively low. In my
exposition I have used mathematics to provide insight and understanding rather than
to establish rigorous mathematical foundations.

The selection and treatment of material reflect my background as an electrical and computer engineer. The operation of artificial neural networks is viewed as that of
nonlinear systems: Static networks are viewed as mapping or static input/output
systems, and recurrent networks are viewed as dynamical systems with evolving
"state." The systems approach is also evident when it comes to discussing the stability
of learning algorithms and recurrent network retrieval dynamics, as well as in the
adopted classifications of neural networks as discrete-state or continuous-state and
discrete-time or continuous-time. The neural network paradigms (architectures and
their associated learning rules) treated here were selected because of their relevance,
mathematical tractability, and/or practicality. Omissions have been made for a number
of reasons, including complexity, obscurity, and space.

This book is organized into eight chapters.


Chapter 1 introduces the reader to the most basic artificial neural net, consisting of a
single linear threshold gate (LTG). The computational capabilities of linear and
polynomial threshold gates are derived. A fundamental theorem, the function counting
theorem, is proved and is applied to study the capacity and the generalization
capability of threshold gates. The concepts covered in this chapter are crucial because
they lay the theoretical foundations for justifying and exploring the more general
artificial neural network architectures treated in later chapters.

Chapter 2 mainly deals with theoretical foundations of multivariate function approximation using neural networks. The function counting theorem of Chapter 1 is
employed to derive upper bounds on the capacity of various feedforward nets of
LTGs. The necessary bounds on the size of LTG-based multilayer classifiers for the
cases of training data in general position and in arbitrary position are derived.
Theoretical results on continuous function approximation capabilities of feedforward
nets, with units employing various nonlinearities, are summarized. The chapter
concludes with a discussion of the computational effectiveness of neural net
architectures and the efficiency of their hardware implementations.

Learning rules for single-unit and single-layer nets are covered in Chapter 3. More
than 20 basic discrete-time learning rules are presented. Supervised rules are
considered first, followed by reinforcement, Hebbian, competitive, and feature
mapping rules. The presentation of these learning rules is unified in the sense that
they may all be viewed as realizing incremental steepest-gradient-descent search on a
suitable criterion function. Examples of single-layer architectures are given to
illustrate the application of unsupervised learning rules (e.g., principal component
analysis, clustering, vector quantization, and self-organizing feature maps).

Chapter 4 is concerned with the theoretical aspects of supervised, unsupervised, and reinforcement learning rules. The chapter starts by developing a unifying framework for the characterization of various learning rules (supervised and unsupervised). Under
this framework, a continuous-time learning rule is viewed as a first-order stochastic
differential equation/dynamical system whereby the state of the system evolves so as
to minimize an associated instantaneous criterion function. Statistical approximation
techniques are employed to study the dynamics and stability, in an "average" sense, of
the stochastic system. This approximation leads to an "average learning equation"
that, in most cases, can be cast as a globally, asymptotically stable gradient system
whose stable equilibria are minimizers of a well-defined criterion function. Formal
analysis is provided for supervised, reinforcement, Hebbian, competitive, and
topology-preserving learning. Also, the generalization properties of deterministic and
stochastic neural nets are analyzed. The chapter concludes with an investigation of
the complexity of learning in multilayer neural nets.

Chapter 5 deals with learning in multilayer artificial neural nets. It extends the
gradient descent-based learning to multilayer feedforward nets, which results in the
back error propagation learning rule (or backprop). An extensive number of methods
and heuristics for improving backprop's convergence speed and solution quality are
presented, and an attempt is made to give a theoretical basis for such methods and
heuristics. Several significant applications of backprop-trained multilayer nets are
described. These applications include conversion of English text to speech, mapping
of hand gestures to speech, recognition of handwritten ZIP codes, continuous vehicle
navigation, and medical diagnosis. The chapter also extends backprop to recurrent
networks capable of temporal association, nonlinear dynamical system modeling, and
control.

Chapter 6 is concerned with other important adaptive multilayer net architectures, such as the radial basis function (RBF) net and the cerebellar model articulation
controller (CMAC) net, and their associated learning rules. These networks often have
similar computational capabilities to feedforward multilayer nets of sigmoidal units,
but with the potential for faster learning. Adaptive multilayer unit-allocating nets such
as hyperspherical classifiers, restricted Coulomb energy (RCE) net, and cascade-
correlation net are discussed. The chapter also addresses the issue of unsupervised
learning in multilayer nets, and it describes two specific networks [adaptive resonance
theory (ART) net and the autoassociative clustering net] suitable for adaptive data
clustering. The clustering capabilities of these nets are demonstrated through
examples, including the decomposition of complex electromyogram signals.

Chapter 7 discusses associative neural memories. Various models of associative learning and retrieval are presented and analyzed, with emphasis on recurrent models.
The stability, capacity, and error-correction capabilities of these models are analyzed.
The chapter concludes by describing the use of one particular recurrent model (the
Hopfield continuous model) for solving combinatorial optimization problems.

Global search methods for optimal learning and retrieval in multilayer neural networks are the topic of Chapter 8. It covers the use of simulated annealing, mean-
field annealing, and genetic algorithms for optimal learning. Simulated annealing is
also discussed in the context of local-minima-free retrievals in recurrent neural
networks (Boltzmann machines). Finally, a hybrid genetic algorithm/gradient-descent
search method that combines optimal and fast learning is described.

Each chapter concludes with a set of problems designed to allow the reader to further
explore the concepts discussed. More than 200 problems of varying degrees of
difficulty are provided. The problems can be divided roughly into three categories.
The first category consists of problems that are relatively easy to solve. These
problems are designed to directly reinforce the topics discussed in the book. The
second category of problems, marked with an asterisk (*), is relatively more difficult.
These problems normally involve mathematical derivations and proofs and are
intended to be thought provoking. Many of these problems include reference to
technical papers in the literature that may give complete or partial solutions. This
second category of problems is intended mainly for readers interested in exploring
advanced topics for the purpose of stimulating original research ideas. Problems
marked with a dagger (†) represent a third category of problems that are numerical in
nature and require the use of a computer. Some of these problems are mini
programming projects, which should be especially useful for students.

This book contains enough material for a full semester course on artificial neural
networks at the first-year graduate level. I have also used this material selectively to
teach an upper-level undergraduate introductory course. For the undergraduate course,
one may choose to skip all or a subset of the following material: Sections 1.4-1.6, 2.1-2.2, 4.3-4.8, 5.1.2, 5.4.3-5.4.5, 6.1.2, 6.2-6.4, 6.4.2, 7.2.2, 7.4.1-7.4.4, 8.3.2, 8.4.2,
and 8.6.

I hope that this book will prove useful to those students and practicing professionals
who are interested not only in understanding the underlying theory of artificial neural
networks but also in pursuing research in this area. A list of about 700 relevant
references is included with the aim of providing guidance and direction for the
readers' own search of the research literature. Even though this reference list may
seem comprehensive, the published literature is too extensive to allow such a list to be
complete.

Acknowledgments
First and foremost, I acknowledge the contributions of the many researchers in the
area of artificial neural networks on which most of the material in this text is based. It
would have been extremely difficult (if not impossible) to write this book without the
support and assistance of a number of organizations and individuals. I would first like
to thank the National Science Foundation (through a PYI Award), Electric Power
Research Institute (EPRI), Ford Motor Company, Mentor Graphics, Sun Microsystems,
Unisys Corporation, Whitaker Foundation, and Zenith Data Systems for
supporting my research. I am also grateful for the support I have received for this
project from Wayne State University through a Career Development Chair Award.

I thank my students, who have made classroom use of preliminary versions of this
book and whose questions and comments have definitely enhanced it. In particular, I
would like to thank Raed Abu Zitar, David Clark, Mike Finta, Jing Song, Agus
Sudjianto, Chuanming (Chuck) Wang, Hui Wang, Paul Watta, and Abbas Youssef. I
also would like to thank my many colleagues in the artificial neural networks
community and at Wayne State University, especially Dr. A. Robert Spitzer, for many
enjoyable and productive conversations and collaborations.

I am indebted to Mike Finta, who very capably and enthusiastically typed the complete
manuscript and helped with most of the artwork, and to Dr. Paul Watta of the
Computation and Neural Networks Laboratory, Wayne State University, for his
critical reading of the manuscript and assistance with the simulations that led to
Figures 5.3.8 and 5.3.9.

My deep gratitude goes to the reviewers for their critical and constructive suggestions.
They are Professors Shun-Ichi Amari of the University of Tokyo, James Anderson of
Brown University, Thomas Cover of Stanford University, Richard Golden of the
University of Texas-Dallas, Laveen Kanal of the University of Maryland, John Taylor
of King's College London, Francis T. S. Yu of Pennsylvania State University, Dr.
Granino Korn of G. A. and T. M. Korn Industrial Consultants, and other anonymous
reviewers.

Finally, let me thank my wife Amal, daughter Lamees, and son Tarek for their quiet
patience through the many lonely hours during the preparation of the manuscript.

Mohamad H. Hassoun
Detroit, 1994
Table of Contents
Fundamentals of Artificial Neural Networks
by Mohamad H. Hassoun
(MIT Press, 1995)

Chapter 1 Threshold Gates

1.0 Introduction
1.1 Threshold Gates
1.1.1 Linear Threshold Gates
1.1.2 Quadratic Threshold Gates
1.1.3 Polynomial Threshold Gates
1.2 Computational Capabilities of Polynomial Threshold Gates
1.3 General Position and the Function Counting Theorem
1.3.1 Weierstrass's Approximation Theorem
1.3.2 Points in General Position
1.3.3 Function Counting Theorem
1.3.4 Separability in φ-Space
1.4 Minimal PTG Realization of Arbitrary Switching Functions
1.5 Ambiguity and Generalization
1.6 Extreme Points
1.7 Summary
Problems

Chapter 2 Computational Capabilities of Artificial Neural Networks

2.0 Introduction
2.1 Some Preliminary Results on Neural Network Mapping Capabilities
2.1.1 Network Realization of Boolean Functions
2.1.2 Bounds on the Number of Functions Realizable by a Feedforward
Network of LTG's
2.2 Necessary Lower Bounds on the Size of LTG Networks
2.2.1 Two Layer Feedforward Networks
2.2.2 Three Layer Feedforward Networks
2.2.3 Generally Interconnected Networks with no Feedback
2.3 Approximation Capabilities of Feedforward Neural Networks for Continuous
Functions
2.3.1 Kolmogorov's Theorem
2.3.2 Single Hidden Layer Neural Networks are Universal Approximators
2.3.3 Single Hidden Layer Neural Networks are Universal Classifiers
2.4 Computational Effectiveness of Neural Networks
2.4.1 Algorithmic Complexity
2.4.2 Computational Energy
2.5 Summary
Problems

Chapter 3 Learning Rules


3.0 Introduction
3.1 Supervised Learning in a Single Unit Setting
3.1.1 Error Correction Rules
Perceptron Learning Rule
Generalizations of the Perceptron Learning Rule
The Perceptron Criterion Function
Mays Learning Rule
Widrow-Hoff (alpha-LMS) Learning Rule
3.1.2 Other Gradient Descent-Based Learning Rules
mu-LMS Learning Rule
The mu-LMS as a Stochastic Process
Correlation Learning Rule
3.1.3 Extension of the mu-LMS Rule to Units with Differentiable Activation
Functions: Delta Rule
3.1.4 Adaptive Ho-Kashyap (AHK) Learning Rules
3.1.5 Other Criterion Functions
3.1.6 Extension of Gradient Descent-Based Learning to Stochastic Units
3.2 Reinforcement Learning
3.2.1 Associative Reward-Penalty Reinforcement Learning Rule
3.3 Unsupervised Learning
3.3.1 Hebbian Learning
3.3.2 Oja's Rule
3.3.3 Yuille et al. Rule
3.3.4 Linsker's Rule
3.3.5 Hebbian Learning in a Network Setting: Principal Component Analysis
(PCA)
PCA in a Network of Interacting Units
PCA in a Single Layer Network with Adaptive Lateral Connections
3.3.6 Nonlinear PCA
3.4 Competitive Learning
3.4.1 Simple Competitive Learning
3.4.2 Vector Quantization
3.5 Self-Organizing Feature Maps: Topology Preserving Competitive Learning
3.5.1 Kohonen's SOFM
3.5.2 Examples of SOFMs
3.6 Summary
Problems

Chapter 4 Mathematical Theory of Neural Learning

4.0 Introduction
4.1 Learning as a Search Mechanism
4.2 Mathematical Theory of Learning in a Single Unit Setting
4.2.1 General Learning Equation
4.2.2 Analysis of the Learning Equation
4.2.3 Analysis of some Basic Learning Rules
4.3 Characterization of Additional Learning Rules
4.3.1 Simple Hebbian Learning
4.3.2 Improved Hebbian Learning
4.3.3 Oja's Rule
4.3.4 Yuille et al. Rule
4.3.5 Hassoun's Rule
4.4 Principal Component Analysis (PCA)
4.5 Theory of Reinforcement Learning
4.6 Theory of Simple Competitive Learning
4.6.1 Deterministic Analysis
4.6.2 Stochastic Analysis
4.7 Theory of Feature Mapping
4.7.1 Characterization of Kohonen's Feature Map
4.7.2 Self-Organizing Neural Fields
4.8 Generalization
4.8.1 Generalization Capabilities of Deterministic Networks
4.8.2 Generalization in Stochastic Networks
4.9 Complexity of Learning
4.10 Summary
Problems

Chapter 5 Adaptive Multilayer Neural Networks I

5.0 Introduction
5.1 Learning Rule for Multilayer Feedforward Neural Networks
5.1.1 Error Backpropagation Learning Rule
5.1.2 Global Descent-Based Error Backpropagation
5.2 Backprop Enhancements and Variations
5.2.1 Weights Initialization
5.2.2 Learning Rate
5.2.3 Momentum
5.2.4 Activation Function
5.2.5 Weight Decay, Weight Elimination, and Unit Elimination
5.2.6 Cross-Validation
5.2.7 Criterion Functions
5.3 Applications
5.3.1 NetTalk
5.3.2 Glove-Talk
5.3.3 Handwritten ZIP Code Recognition
5.3.4 ALVINN: A Trainable Autonomous Land Vehicle
5.3.5 Medical Diagnosis Expert Net
5.3.6 Image Compression and Dimensionality Reduction
5.4 Extensions of Backprop for Temporal Learning
5.4.1 Time-Delay Neural Networks
5.4.2 Backpropagation Through Time
5.4.3 Recurrent Back-Propagation
5.4.4 Time-Dependent Recurrent Back-Propagation
5.4.5 Real-Time Recurrent Learning
5.5 Summary
Problems

Chapter 6 Adaptive Multilayer Neural Networks II

6.0 Introduction
6.1 Radial Basis Function (RBF) Networks
6.1.1 RBF Networks versus Backprop Networks
6.1.2 RBF Network Variations
6.2 Cerebellar Model Articulation Controller (CMAC)
6.2.1 CMAC Relation to Rosenblatt's Perceptron and Other Models
6.3 Unit-Allocating Adaptive Networks
6.3.1 Hyperspherical Classifiers
Restricted Coulomb Energy (RCE) Classifier
Real-Time Trained Hyperspherical Classifier
6.3.2 Cascade-Correlation Network
6.4 Clustering Networks
6.4.1 Adaptive Resonance Theory (ART) Networks
6.4.2 Autoassociative Clustering Network
6.5 Summary
Problems

Chapter 7 Associative Neural Memories

7.0 Introduction
7.1 Basic Associative Neural Memory Models
7.1.1 Simple Associative Memories and their Associated Recording Recipes
Correlation Recording Recipe
A Simple Nonlinear Associative Memory Model
Optimal Linear Associative Memory (OLAM)
OLAM Error Correction Capabilities
Strategies for Improving Memory Recording
7.1.2 Dynamic Associative Memories (DAM)
Continuous-Time Continuous-State Model
Discrete-Time Continuous-State Model
Discrete-Time Discrete-State Model
7.2 DAM Capacity and Retrieval Dynamics
7.2.1 Correlation DAMs
7.2.2 Projection DAMs
7.3 Characteristics of High-Performance DAMs
7.4 Other DAM Models
7.4.1 Brain-State-in-a-Box (BSB) DAM
7.4.2 Non-Monotonic Activations DAM
Discrete Model
Continuous Model
7.4.3 Hysteretic Activations DAM
7.4.4 Exponential Capacity DAM
7.4.5 Sequence Generator DAM
7.4.6 Heteroassociative DAM
7.5 The DAM as a Gradient Net and its Application to Combinatorial Optimization
7.6 Summary
Problems

Chapter 8 Global Search Methods for Neural Networks

8.0 Introduction
8.1 Local versus Global Search
8.1.1 A Gradient Descent/Ascent Search Strategy
8.1.2 Stochastic Gradient Search: Global Search via Diffusion
8.2 Simulated Annealing-Based Global Search
8.3 Simulated Annealing for Stochastic Neural Networks
8.3.1 Global Convergence in a Stochastic Recurrent Neural Net: The
Boltzmann Machine
8.3.2 Learning in Boltzmann Machines
8.4 Mean-Field Annealing and Deterministic Boltzmann Machines
8.4.1 Mean-Field Retrieval
8.4.2 Mean-Field Learning
8.5 Genetic Algorithms in Neural Network Optimization
8.5.1 Fundamentals of Genetic Search
8.5.2 Application of Genetic Algorithms to Neural Networks
8.6 Genetic Algorithm Assisted Supervised Learning
8.6.1 Hybrid GA/Gradient Descent Method for Feedforward Multilayer Net
Training
8.6.2 Simulations
8.7 Summary
Problems

References
Index
1. THRESHOLD GATES
1.0 Introduction

Artificial neural networks are parallel computational models, comprised of densely interconnected
adaptive processing units. These networks are fine-grained parallel implementations of nonlinear static
or dynamic systems. A very important feature of these networks is their adaptive nature where
"learning by example" replaces "programming" in solving problems. This feature makes such
computational models very appealing in application domains where one has little or incomplete
understanding of the problem to be solved, but where training data is available. Another key feature is
the intrinsic parallel architecture which allows for fast computation of solutions when these networks
are implemented on parallel digital computers or, ultimately, when implemented in customized
hardware.

Artificial neural networks are viable computational models for a wide variety of problems. These
include pattern classification, speech synthesis and recognition, adaptive interfaces between humans
and complex physical systems, function approximation, image compression, associative memory,
clustering, forecasting and prediction, combinatorial optimization, nonlinear system modeling, and
control. These networks are "neural" in the sense that they may have been inspired by neuroscience,
but not necessarily because they are faithful models of biological neural or cognitive phenomena. In
fact, the majority of the networks covered in this book are more closely related to traditional
mathematical and/or statistical models such as non-parametric pattern classifiers, clustering algorithms,
nonlinear filters, and statistical regression models than they are to neurobiological models.

The "artificial neuron" is the basic building block/processing unit of an artificial neural network. It is
necessary to understand the computational capabilities of this processing unit as a prerequisite for
understanding the function of a network of such units. The artificial neuron model considered here is
closely related to an early model used in threshold logic (Winder, 1962; Brown, 1964; Cover, 1964;
Dertouzos, 1965; Hu, 1965; Lewis and Coates, 1967; Sheng, 1969; Muroga, 1971). Here, an
approximation to the function of a biological neuron is captured by the linear threshold gate
(McCulloch and Pitts, 1943).

This chapter investigates the computational capabilities of a linear threshold gate (LTG). Also in this
chapter, the polynomial threshold gate (PTG) is developed as a generalization of the LTG, and its
computational capabilities are studied. An important theorem, known as the Function Counting
Theorem, is proved and is used to determine the statistical capacity of LTG's and PTG's. Then, a
method for minimal parameter PTG synthesis is developed for the realization of arbitrary binary
mappings (switching functions). Finally, the chapter concludes by defining the concepts of ambiguous
and extreme points and applies them to study the generalization capability of threshold gates and to
determine the average amount of information necessary for characterizing large data sets by threshold
gates, respectively.

1.1 Threshold Gates

1.1.1 Linear Threshold Gates

The basic function of a linear threshold gate (LTG) is to discriminate between labeled points (vectors)
belonging to two different classes. An LTG maps a vector of input data, x, into a single binary output,
y. The transfer function of an LTG is given analytically by
    y = 1  if  w^T x = w1 x1 + w2 x2 + ... + wn xn ≥ T;   y = 0  otherwise        (1.1.1)

where x = [x1 x2 ... xn]^T and w = [w1 w2 ... wn]^T are the input and weight (column) vectors, respectively,
and T is a threshold constant. Figure 1.1.1 (a) shows a symbolic representation of an LTG with n
inputs. A graphical representation of Equation (1.1.1) is shown in Figure 1.1.1 (b).


Figure 1.1.1. (a) Symbolic representation of a linear threshold gate and (b) its transfer function.

The vector x in Equation (1.1.1) is n-dimensional with binary or real components (i.e., x ∈ {0, 1}^n or x ∈ R^n), and w ∈ R^n. Thus, the LTG output y may assume either of the following mapping forms: y: {0, 1}^n → {0, 1} or y: R^n → {0, 1}.

An LTG performs a linear weighted-sum operation followed by a nonlinear hard clipping/thresholding


operation, as described in Equation (1.1.1). Figure 1.1.2 shows an example LTG which realizes the
Boolean function y given by:

    y = x1' x2 + x2 x3'        (1.1.2)

where x1, x2, and x3 belong to {0, 1}. Equation (1.1.2) reads y = [(NOT x1) AND x2] OR [x2 AND (NOT x3)]. Here, the weight vector w = [-1 2 -1]^T with any threshold T satisfying 0 < T ≤ 1 leads to a correct realization of y. One
way of arriving at the solution for w and T is to directly solve the following set of eight inequalities:

0 < T            w1 + w2 ≥ T

w1 < T           w1 + w3 < T

w2 ≥ T           w2 + w3 ≥ T

w3 < T           w1 + w2 + w3 < T

These inequalities are obtained by substituting all eight binary input combinations (x1, x2, x3) and their
associated y values from Equation (1.1.2), in Equation (1.1.1). For example, for input
(x1, x2, x3) = (0, 0, 0), the output y [using Equation (1.1.2)] is given by y = 0. Hence, for a proper
operation of the LTG, we require: 0w1 + 0w2 + 0w3 < T, which gives the first of the above eight
inequalities: 0 < T. The other seven inequalities are obtained similarly for each of the remaining cases:
(x1, x2, x3) = (0, 0, 1) through (1, 1, 1). It should be noted that the solution given in Figure 1.1.2 is one
of an infinite number of possible solutions for the above set of inequalities.
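
To make the example concrete, the following sketch (not from the book) enumerates all eight input combinations and checks that one admissible solution of the above inequalities (w = [-1 2 -1]^T with T = 1/2, assumed here for illustration and not necessarily the values of Figure 1.1.2) realizes the Boolean function of Equation (1.1.2):

```python
# Brute-force check that a candidate LTG realizes y = (NOT x1 AND x2) OR (x2 AND NOT x3).
from itertools import product

w = [-1.0, 2.0, -1.0]   # candidate weight vector (one of infinitely many solutions)
T = 0.5                 # candidate threshold

def ltg(x, w, T):
    """Linear threshold gate: output 1 when the weighted sum reaches the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= T else 0

def target(x1, x2, x3):
    """Boolean function of Equation (1.1.2)."""
    return ((1 - x1) & x2) | (x2 & (1 - x3))

for x in product((0, 1), repeat=3):
    assert ltg(x, w, T) == target(*x), f"mismatch at {x}"
print("The candidate LTG realizes Equation (1.1.2) for all 8 input combinations.")
```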

Figure 1.1.2. Example of an LTG realization of the Boolean function y = x1' x2 + x2 x3' of Equation (1.1.2).

There exists a total of 2^(2^n) unique Boolean functions (switching functions) of n variables: there are 2^n combinations of n independent binary variables, which lead to 2^(2^n) unique ways of labeling these 2^n combinations into two distinct categories (i.e., 0 or 1). It can be shown (see Section 1.2) that a single n-input LTG is capable of realizing only a small subset of these Boolean functions (refer to Figure
threshold function is a linearly separable function, that is, a function with inputs belonging to two
distinct categories (classes) such that the inputs corresponding to one category may be perfectly,
geometrically separated from the inputs corresponding to the other category by a hyperplane. Any
function that is not linearly separable, such as the exclusive-OR (XOR) function y = x1' x2 + x1 x2', cannot be realized using a single LTG and is termed a non-threshold function. Linear and non-linear separability are illustrated in Figure 1.1.4 (a) and (b), respectively.

Figure 1.1.3. Pictorial representation depicting the set of threshold functions as a small subset of the set
of all Boolean functions.


Figure 1.1.4. Linear versus non-linear separability: (a) Linearly separable function, and (b) non-linearly
separable function. Filled circles and open squares designate points in the first class and second class,
respectively.

Threshold functions have been exhaustively enumerated for small n (Cameron, 1960; Muroga, 1971), as shown in Table 1.1.1. This table shows the limitation of a single LTG with regard to the realization of an arbitrary Boolean function. Here, as n → ∞, the ratio of the number of LTG-realizable Boolean functions, Bn, to the total number of Boolean functions approaches zero; formally,

    lim (n → ∞)  Bn / 2^(2^n) = 0        (1.1.3)

This result is verified in Section 1.2.

    n    Number of threshold functions, Bn    Total number of Boolean functions, 2^(2^n)
    1    4                                    4
    2    14                                   16
    3    104                                  256
    4    1,882                                65,536
    5    94,572                               4.3 x 10^9
    6    15,028,134                           1.8 x 10^19
    7    8,378,070,864                        3.4 x 10^38
    8    17,561,539,552,946                   1.16 x 10^77

Table 1.1.1. Comparison of the number of threshold functions versus the number of
all possible Boolean functions for selected values of n.

Although a single LTG cannot represent all Boolean functions, it is capable of realizing the universal NAND (or NOR) logic operation. Hence, the LTG is a
universal logic gate; any Boolean function is realizable using a network of LTG's
(only two logic levels are needed). Besides the basic NAND and NOR functions,
though, an LTG is capable of realizing many more Boolean functions. Therefore a
single n-input LTG is a much more powerful gate than a single n-input NAND or
NOR gate.

For n ≤ 5, a Karnaugh map (or K-map) may be employed to identify threshold functions or to perform the decomposition of nonthreshold functions into two or more
factors, each of which will be a threshold function. This decomposition allows for
obtaining an LTG network realization for Boolean functions as illustrated later in
Section 2.1.1 of Chapter 2. Figure 1.1.5 shows the admissible K-map threshold
patterns for n = 3. The K-map for the threshold function in Equation (1.1.2) is shown
in Figure 1.1.6 along with its corresponding threshold pattern. Each admissible pattern
may be in any position on the map, provided that its basic topological structure is
preserved. Note that the complements of such patterns also represent admissible
threshold patterns (refer to Example 1.2.1 in Section 1.2 for an example). Admissible
patterns of Boolean functions of n variables are also admissible for functions of n + 1
or more variables (Kohavi, 1978).

Figure 1.1.5. Admissible Karnaugh map threshold patterns for n = 3.


Figure 1.1.6. A Karnaugh map representation for the threshold function y = x1' x2 + x2 x3' of Equation (1.1.2). The 1's of this function can be grouped as shown to form one of the threshold patterns depicted in Figure 1.1.5.

1.1.2 Quadratic Threshold Gates

Given that for large n the number of threshold functions is very small in comparison
to the total number of available Boolean functions, one might try to design a yet more
powerful "logic" gate which can realize non-threshold functions. This can be
accomplished by expanding the number of inputs to an LTG. For example, one can do
this by feeding the products or ANDings of inputs as new inputs to the LTG. In this
case, we require a fixed preprocessing layer of AND gates which artificially increases
the dimensionality of the input space. We expect that the resulting Boolean function
(which is now only partially specified) becomes a threshold function and hence
realizable using a single LTG. This phenomenon is illustrated through an example
(Example 1.2.1) given in the next section.

The realization of a Boolean function by the above process leads to a quadratic


threshold gate (QTG). The general transfer characteristics for an n-input QTG are
given by

    y = 1  if  Σ_{i=1..n} wi xi + Σ_{i=1..n-1} Σ_{j=i+1..n} wij xi xj ≥ T;   y = 0 otherwise        (1.1.4)

for x ∈ {0, 1}^n, and

    y = 1  if  Σ_{i=1..n} wi xi + Σ_{i=1..n} Σ_{j=i..n} wij xi xj ≥ T;   y = 0 otherwise        (1.1.5)

for x ∈ R^n. Note that the only difference between Equations (1.1.4) and (1.1.5) is
the range of the index j of the second summation in the double summation term. The
bounds on the double summations in Equations (1.1.4) and (1.1.5) eliminate the wijxixj
and wjixjxi duplications. QTG's greatly increase the number of realizable Boolean functions as
compared to LTG's. By comparing the number of degrees of freedom (number of weights plus
threshold) listed in Table 1.1.2, we find an increased flexibility of a QTG over an LTG.

    Threshold gate    Number of degrees of freedom/parameters (including threshold)
    LTG               n + 1
    QTG               n + n(n-1)/2 + 1  (binary inputs);   n + n(n+1)/2 + 1  (real inputs)

Table 1.1.2. Comparison of the number of degrees of freedom in an LTG versus a QTG.

1.1.3 Polynomial Threshold Gates

Although the QTG greatly increases the number of functions that can be realized, a
single QTG still cannot realize all Boolean functions of n variables. Knowing that a
second order polynomial expansion of inputs offers some improvements, it makes
sense to extend this concept to r-order polynomials. This results in a polynomial
threshold gate denoted PTG(r). Note that the LTG and QTG are special cases, where
LTG ≡ PTG(1) and QTG ≡ PTG(2). The general transfer equation for a PTG(r) is given
by

    y = 1  if  Σ_i wi xi + Σ_{i<j} wij xi xj + ... + Σ_{i1<i2<...<ir} w_{i1 i2 ... ir} x_{i1} x_{i2} ··· x_{ir} ≥ T;   y = 0 otherwise        (1.1.6)

In this case, the number of degrees of freedom is given by

    d = 1 + Σ_{i=1..r} (n choose i)        (1.1.7)

for x ∈ {0, 1}^n, and

    d = 1 + Σ_{i=1..r} (n + i - 1 choose i)        (1.1.8)

for x ∈ R^n. Here, the term (m choose k) gives the number of different combinations of m
different things, k at a time, without repetitions. The PTG appears to be a powerful
gate. It is worthwhile to investigate its capabilities.
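
The growth of d with the order r is easy to tabulate; the sketch below counts the monomials of Equations (1.1.7) and (1.1.8) directly (the function names are ours) and shows d reaching 2^n for binary inputs when r = n:

```python
# Count PTG(r) degrees of freedom (weights plus threshold) for n inputs.
from math import comb

def d_binary(n, r):
    """Binary inputs: monomials are products of distinct variables (x_i^2 = x_i)."""
    return 1 + sum(comb(n, i) for i in range(1, r + 1))

def d_real(n, r):
    """Real inputs: monomials may repeat variables (combinations with repetition)."""
    return 1 + sum(comb(n + i - 1, i) for i in range(1, r + 1))

for n in (2, 4, 8):
    print(f"n = {n}: LTG d = {d_binary(n, 1)}, QTG d = {d_binary(n, 2)} (binary) "
          f"/ {d_real(n, 2)} (real), PTG(n) d = {d_binary(n, n)} = 2^n")
```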

1.2 Computational Capabilities of Polynomial Threshold Gates

Next, we consider the capabilities of PTG's in realizing arbitrary Boolean functions. Let us start with a
theorem which establishes the universality of a single n-input PTG(r) in the realization of arbitrary
Boolean functions.

Theorem 1.2.1 (Nilsson, 1965; Krishnan, 1966): Any Boolean function of n-variables can be realized
using a PTG of order r ≤ n.

The proof of this theorem follows from the discussion in Section 1.4. Theorem 1.2.1 indicates that r =
n is an upper bound on the order of the PTG for realizing arbitrary Boolean functions. It implies that
the most difficult n-variable Boolean function to implement by a PTG(r) requires r = n, and this may

require d = 2^n parameters. So, in the worst case, the number of required parameters
increases exponentially in n.

Winder (1963) gave the following upper bound on the number of realizable Boolean functions, Bn(r),
of n variables by a PTG(r), r ≤ n (see Section 1.3.4 for details):

    Bn(r) ≤ C(2^n, d - 1)        (1.2.1)

where d is given by Equation (1.1.7) and

    C(m, d) = 2 Σ_{k=0..d} (m - 1 choose k)        (1.2.2)

A couple of special cases are interesting to examine. The first is when r = n. From Theorem 1.2.1, any
n-variable Boolean function is realizable using a single PTG(n). This means that the right hand side of
Equation (1.2.1) should not exceed the number of n-variable Boolean functions, 2^(2^n), if it is to be a tight upper bound. Indeed, this can be shown to be the case by first starting with Equation (1.2.1) and then finding the limit of C(2^n, d - 1) as r approaches n. For r = n we have d = 2^n and, since Σ_{k=0..m-1} (m - 1 choose k) = 2^(m-1), we have

    C(2^n, 2^n - 1) = 2 · 2^(2^n - 1)        (1.2.3)

or

    C(2^n, 2^n - 1) = 2^(2^n)        (1.2.4)

which is the desired result.

The other interesting case is for r = 1, which leads to the case of an LTG. Employing Equation (1.2.1)
with d = n + 1, n ≥ 2, gives the following upper bound on the number of n-input threshold functions:

    Bn(1) ≤ C(2^n, n) = 2 Σ_{k=0..n} (2^n - 1 choose k)        (1.2.5)

It can be shown (Winder, 1963; see Problem 1.3.3) that a yet tighter upper bound on Bn(1) exists.
Equation (1.2.5) allows us to validate Equation (1.1.3) by taking the following limit:

    lim (n → ∞)  Bn(1) / 2^(2^n)  ≤  lim (n → ∞)  C(2^n, n) / 2^(2^n)  =  0        (1.2.6)

Table 1.2.1 extends Table 1.1.1 by evaluating the above upper bounds on the number of threshold
functions. Note that, despite the pessimistic fact implied by Equation (1.2.6), a single LTG remains a
powerful logic gate by being able to realize a very large number of Boolean functions. This can be seen
from the enumerated results in Table 1.2.1 (see the column labeled Bn(1)). In fact, Bn(1) scales
exponentially in n as can be deduced from the following lower bound (Muroga, 1965)

(1.2.7)

    n    Number of threshold      Total number of               Upper bounds for Bn(1)
         functions, Bn(1)         Boolean functions, 2^(2^n)
    1    4                        4                             4            4            2
    2    14                       16                            14           16           16
    3    104                      256                           128          170          512
    4    1,882                    65,536                        3,882        5,461        65,536
    5    94,572                   4.3 x 10^9                    412,736      559,240      33,554,432
    6    15,028,134               1.8 x 10^19                   1.5 x 10^8   1.9 x 10^8   6.9 x 10^10
    7    8,378,070,864            3.4 x 10^38                   1.9 x 10^11  2.2 x 10^11  5.6 x 10^14
    8    17,561,539,552,946       1.16 x 10^77                  8.2 x 10^14  9.2 x 10^14  1.8 x 10^19

Table 1.2.1. Enumeration of threshold functions and evaluations of various upper bounds for n ≤ 8.
By its very definition, a PTG may be thought of as a two layer network with a fixed preprocessing
layer followed by a high fan-in LTG. Kaszerman (1963) showed that a PTG(r) with binary inputs can

be realized as the cascade of a layer of AND gates, each having a fan-in of between two and r input lines, and a single LTG with d - 1 = n + Σ_{i=2..r} (n choose i) inputs (representing the inputs received from the AND gates plus the original n inputs). The resulting architecture is shown in Figure 1.2.1.

Figure 1.2.1. PTG realization as a cascade of one layer of AND gates and a single LTG.

Example 1.2.1: Consider the XNOR function (also known as the equivalence function): y = x1 x2 + x1' x2'.

Using a Karnaugh map (shown in Figure 1.2.2), it can be verified that this function is not a threshold
function; therefore, it cannot be realized using a single 2-input LTG (a diagonal pattern of 1's in the K-
map is a non-threshold pattern and indicates a non-threshold/non-linearly separable function).

Figure 1.2.2. Karnaugh map of XNOR function

Since n = 2, Theorem 1.2.1 implies that a PTG(2) is sufficient; i.e., the QTG with three weights, shown
in Figure 1.2.3, should be sufficient to realize the XNOR function.
Figure 1.2.3. A PTG(2) (or QTG) with binary inputs.

By defining x3 = x1 x2 (the AND of x1 and x2), we can treat the PTG as a 3-input LTG and generate the K-map shown in
Figure 1.2.4.

Figure 1.2.4. Karnaugh map for XNOR in expanded input space.

Since x3 is defined as x1 x2, there are some undefined states which we will refer to as "don't care"
states; these states can be assigned either a 1 or 0, to our liking, and give us the flexibility to identify
the threshold pattern shown in Figure 1.2.5 (this pattern is the complement of one of the threshold
patterns shown in Figure 1.1.5, and therefore it is an admissible threshold pattern).

Figure 1.2.5. Karnaugh map after an appropriate assignment of 1's and 0's to the "don't care" states of
the map in Figure 1.2.4.

The above K-map verifies that the XNOR function, in the expanded input space, is realizable using a single LTG. The QTG in Figure
1.2.3 will realize the desired XNOR function with weight assignments as shown in Figure 1.2.6.
Dertouzos (1965) describes a tabular method for determining weights for LTG's with small n. This
method works well here, but will not be discussed. The topic of adaptive weight computation for LTG's
is treated in Chapter Three.
Figure 1.2.6. Weight assignment for a QTG for realizing the XNOR function.

A geometric interpretation of the synthesized QTG may be obtained by first employing Equation
(1.1.5), which gives

(1.2.8)

In arriving at this result, we took the liberty of interpreting the AND operation as multiplication. Note
that this interpretation does not affect the QTG in Figure 1.2.6 when the inputs are in {0, 1}. Next,
Equation (1.2.8) may be used to define a separating surface g that discriminates between the 1 and 0
vertices, according to

(1.2.9)

(here '≜' is the symbol for "defined as"). A plot of this function is shown in Figure 1.2.7, which
illustrates how the QTG employs a nonlinear separating surface in order to correctly classify all
vertices of the XNOR problem.

Figure 1.2.7. Separating surface realized by the QTG of Equation (1.2.9).

Another way of geometrically interpreting the operation of the QTG described by Equation (1.2.8) is
possible if we define the product x1x2 as a new input x3. Now, we can visualize the surface of Equation
(1.2.9) in 3-dimensions as a plane which properly separates the four vertices (patterns) of the XNOR
function (see Problem 1.2.4 for an exploration of this idea).
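
As a numerical check of this example, the sketch below verifies that a QTG with one particular weight assignment (w1 = w2 = -1, w12 = 2, T = -0.5, chosen here for illustration and not necessarily the assignment of Figure 1.2.6) correctly classifies all four vertices of the XNOR problem:

```python
# Verify that a quadratic threshold gate separates the XNOR vertices.
from itertools import product

w1, w2, w12, T = -1.0, -1.0, 2.0, -0.5   # one working assignment, chosen for illustration

def qtg(x1, x2):
    """QTG output: threshold the quadratic weighted sum of Equation (1.1.5)."""
    return 1 if (w1 * x1 + w2 * x2 + w12 * x1 * x2) >= T else 0

for x1, x2 in product((0, 1), repeat=2):
    xnor = 1 if x1 == x2 else 0
    assert qtg(x1, x2) == xnor
print("QTG output matches XNOR on all four vertices.")
```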

1.3 General Position and the Function Counting Theorem

In this section, we try to answer the following fundamental question: Given m points in {0, 1}^n, how many dichotomies of these m points are realizable with a single PTG(r), for r = 1, 2, ..., n? We are also interested in answering the question for the more general case of m points in R^n. But first, we present a
classical theorem on the capability of polynomials as approximators of continuous functions.
1.3.1 Weierstrass' Approximation Theorem

Theorem 1.3.1 (Weierstrass' Approximation Theorem): Let g be a continuous real-valued function defined on a closed interval [a, b]. Then, given any positive ε, there exists a polynomial y (which may depend on ε) with real coefficients such that

    |g(x) - y(x)| < ε        (1.3.1)

for every x ∈ [a, b].

The proof of this theorem can be found in Apostol (1957).

Theorem 1.3.1 is described by the statement: every continuous function can be "uniformly
approximated" by a polynomial. Note that the order of the polynomial depends on the function being
approximated and the desired accuracy of the approximation. Weierstrass' Approximation Theorem
also applies for the case of a continuous multivariate function g which maps a compact set X ⊂ R^n to a compact set Y ⊂ R (Narendra and Parthasarathy, 1990). Thus, a single PTG of unrestricted order r which employs no thresholding is a universal approximator for continuous functions g: X → Y.
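
A quick numerical illustration of this approximation property is sketched below; it uses a dense least-squares polynomial fit as a practical stand-in for the uniform approximation of the theorem, applied to an arbitrarily chosen continuous function g:

```python
# Illustrate Weierstrass' theorem: the worst-case error of a polynomial fit to a
# continuous function on [a, b] can be driven down by raising the polynomial order.
# (A dense least-squares fit is used as a simple stand-in for uniform approximation.)
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 2001)
g = np.exp(-3 * x) * np.sin(2 * np.pi * x)   # an arbitrary continuous test function

for r in (1, 3, 5, 9):
    coeffs = np.polyfit(x, g, deg=r)          # polynomial y(x) of order r
    y = np.polyval(coeffs, x)
    print(f"order r = {r}: max |g(x) - y(x)| = {np.max(np.abs(g - y)):.4f}")
```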

Theorem 1.3.1 can also be extended to non-polynomial functions. Let {φ1(x), φ2(x), ..., φd(x)} be a complete orthonormal set; then any g satisfying the requirements of Theorem 1.3.1 can be approximated by a function

    y(x) = Σ_{j=1..d} wj φj(x)        (1.3.2)

where the wj's are real constants.

Polynomials may also be used to approximate binary-valued functions defined on a finite set of points.
Examples of such functions are Boolean functions and functions of the form g: S → {0, 1}, where S is a
finite set of arbitrary points in Rn. A PTG differs from a polynomial in that it has an intrinsic quantifier
(threshold nonlinearity) for the output state. This gives the PTG the natural capability for realizing
Boolean functions and complex dichotomies of arbitrary points in Rn. The remainder of this section
develops some theoretical results needed to answer the questions raised at the beginning of this section.

1.3.2 Points in General Position

Let us calculate the number of dichotomies of m points in R^n (ways of labeling m points into two
distinct categories) achieved by a linear separating surface (i.e., LTG). We call each of these
dichotomies a linear dichotomy. For m n-dimensional patterns, the number of linear dichotomies is
equal to twice the number of ways in which the m points can be partitioned by an (n - 1)-dimensional
hyperplane (for each distinct partition, there are two different classifications).

As an example, consider the case m = 4 and n = 2. Figure 1.3.1 shows four points in a two-dimensional
space. The lines li, i = 1, ... , 7, give all possible linear partitions of these four points. In particular,
consider l5. It could be the decision surface implementing either of the following: (1) x1 and x4 in class
1, and x2 and x3 in class 2; or (2) x1 and x4 in class 2, and x2 and x3 in class 1. Thus, one may
enumerate all possible linear dichotomies to be equal to 14. If three of the points belong to the same
line (Figure 1.3.2), there are only six linear partitions. For m > n, we say that a set of m points is in

general position in R^n if and only if no subset of n + 1 points lies on an (n - 1)-dimensional hyperplane; for m ≤ n, a set of m points is in general position if no (m - 2)-dimensional hyperplane
contains the set. Thus the four points of Figure 1.3.1 are in general position, whereas the four points of
Figure 1.3.2 are not. Equivalently, a set of m points in R^n is in general position if every subset of n or
fewer points (vectors) is linearly independent, which implies that
    rank[x1 x2 ... xm] = min(m, n)        (1.3.3)

Note that general position requires a stringent rank condition on the matrix [x1 x2 ... xm] (the matrix
[x1 x2 ... xm] has maximal rank n if at least one n × n submatrix has a nonzero determinant).

It can be shown (see Section 1.3.3) that for m points in general position with m ≤ n + 1, the total number of linear dichotomies is 2^m. This means that a hyperplane is not constrained by the requirement of correctly classifying n + 1 or fewer points in general position. Note that as n → ∞, a set of m random points in R^n is in general position with probability approaching one.

Figure 1.3.1 Points in general position. Figure 1.3.2. Points not in general position.

1.3.3 Function Counting Theorem

The so-called Function Counting Theorem, which counts the number of linearly separable dichotomies
of m points in general position in R^d, is essential for estimating the separating capacity of an LTG or a
PTG and is considered next. This theorem is also useful in giving an upper bound on the number of
linear dichotomies for points in arbitrary position.

Theorem 1.3.2 (Function Counting Theorem; Cover, 1965): The number of linearly separable
dichotomies of m points in general position in Euclidean d-space is

    C(m, d) = 2 Σ_{k=0..d} (m - 1 choose k)        (1.3.4)

The general position requirement on the m points is a necessary and sufficient condition.

Proof of Theorem 1.3.2:


Consider a set of m points, X = {x1, x2, ..., xm}, in general position in R^d. Let C(m, d) be the number of linearly separable dichotomies {X+, X-} of X. Here, X+ (X-) is the subset of X consisting of all points which lie above (below) the separating hyperplane.

Consider a new point, xm+1, such that the set of m + 1 points, X' = {x1, x2, ..., xm+1}, is in general
position. Now, some of the linear dichotomies of the set X can be achieved by hyperplanes which pass
through xm+1. Let the number of such dichotomies be D. For each of these D linear dichotomies there will be two new dichotomies, {X+ ∪ {xm+1}, X-} and {X+, X- ∪ {xm+1}}. This is because when the points are in general position any hyperplane through xm+1 that realizes the dichotomy {X+, X-} can be shifted infinitesimally to allow arbitrary classification of xm+1 without affecting the separation of the dichotomy {X+, X-}. For the remaining C(m, d) - D dichotomies, either {X+ ∪ {xm+1}, X-} or {X+, X- ∪ {xm+1}} must be separable. Therefore, there will be one new linear dichotomy for each old one. Thus, the number of linear dichotomies of X', C(m + 1, d), is given by

    C(m + 1, d) = 2D + [C(m, d) - D] = C(m, d) + D

Again, D is the number of linear dichotomies of X that could have had the dividing hyperplane drawn through xm+1. But this number is simply C(m, d - 1), because constraining the hyperplane to go through a particular point xm+1 makes the problem effectively d - 1 dimensional. This observation allows us to obtain the recursion relation

    C(m + 1, d) = C(m, d) + C(m, d - 1)

The repeated iteration of the above relation for m, m - 1, m - 2, ..., 1 yields

    C(m, d) = Σ_{k=0..m-1} (m - 1 choose k) C(1, d - k)

from which the theorem follows immediately on noting that C(1, N) = 2 for N ≥ 0 (one point can be linearly separated into one category or the other) and C(1, N) = 0 for N < 0; in particular, C(m, d) = 2^m for m ≤ d + 1.

Theorem 1.3.2 may now be employed to study the ability of a single LTG to separate m points in
general position in R^n. Since the total number of possible dichotomies in R^n is 2^m, the probability of a single n-input LTG being able to separate m points in general position (assuming equal probability for the 2^m dichotomies) is

    P_LS(m, n) = C(m, n) / 2^m = 2^(1-m) Σ_{k=0..n} (m - 1 choose k)        (1.3.5)

Equation (1.3.5) is plotted in Figure 1.3.3. Note that if m < 2(n + 1), then as n approaches infinity, P_LS
approaches one; i.e., the LTG almost always separates the m points. At m = 2(n + 1), exactly one half
of all possible dichotomies of the m points are linearly separable. We refer to m = 2(n + 1) as the
statistical capacity of the LTG. It is noted from Figure 1.3.3, that a single LTG is essentially not
capable of handling the classification of m points in general position when m > 3(n + 1).
Figure 1.3.3. Probability of linear separability of m points in general position in Rn.
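
The counting function and the separability probability are straightforward to evaluate numerically. The sketch below computes C(m, d) and P_LS as given by Equations (1.3.4) and (1.3.5) above, and reproduces the capacity behavior just described: P_LS = 1 for m ≤ n + 1, P_LS = 1/2 at m = 2(n + 1), and P_LS near zero beyond m = 3(n + 1).

```python
# Evaluate C(m, d) = 2 * sum_{k=0..d} binom(m-1, k) and the probability that a random
# dichotomy of m points in general position in R^n is linearly separable.
from math import comb

def C(m, d):
    """Number of linearly separable dichotomies of m points in general position in d-space."""
    return 2 * sum(comb(m - 1, k) for k in range(0, d + 1))

def p_ls(m, n):
    """Probability that a random dichotomy of m points is realizable by an n-input LTG."""
    return C(m, n) / 2 ** m

n = 25
for m in (n + 1, 2 * (n + 1), 3 * (n + 1), 4 * (n + 1)):
    print(f"m = {m:3d}: P_LS = {p_ls(m, n):.4f}")
```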

1.3.4 Separability in φ-Space

A PTG(r) with labeled inputs x ∈ R^n may be viewed as an LTG with d - 1 preprocessed inputs (see Figure 1.3.4), where d is the number of PTG degrees of freedom (weights plus threshold). We refer to the mapping φ(x) from the input space R^n to the φ-space R^(d-1) as the φ-mapping. A dichotomy of m points in R^n is said to be "φ-separable" by a PTG(r) if there exists a set of d - 1 PTG weights and a threshold which correctly classify all m points; that is, if there exists a (d - 2)-dimensional hyperplane in φ-space which correctly classifies all m points. The inverse image of this hyperplane in the input space R^n defines a polynomial separating surface of order r, which will be referred to as the φ-surface. Note that the Function Counting Theorem still holds true if the set of m points is in general position in φ-space or, equivalently for the case d ≤ m, if no d points lie on the same φ-surface in the input space. Here, we say X = {x1, x2, ..., xm} is in φ-general position. Therefore, the probability of a single PTG with d degrees of freedom (including threshold) being able to separate m points in φ-general position is given by

    P = C(m, d - 1) / 2^m        (1.3.6)

Figure 1.3.4. Block diagram of a PTG represented as the cascade of a fixed preprocessing layer and a
single LTG.
Since for an arbitrary set of m points in R^n the number of φ-separable dichotomies, L(m, d - 1), is less than or equal to the number of φ-separable dichotomies of m points in φ-general position, we may write:

    L(m, d - 1) ≤ C(m, d - 1)        (1.3.7)

Thus, the number of Boolean functions, Bn(r), of n variables which can be realized by an n-input PTG(r) satisfies

    Bn(r) ≤ C(2^n, d - 1)        (1.3.8)

which is exactly the inequality of Equation (1.2.1).

1.4 Minimal PTG Realization of Arbitrary Switching Functions

Theorem 1.4.1 (Kashyap, 1966): Any n-variable switching (Boolean) function of m points, with m ≤ 2^n, is realizable using a PTG(r) with m or fewer terms (degrees of freedom).

Proof of Theorem 1.4.1:

Let g(x) represent the weighted-sum signal (the signal just before thresholding) for a PTG with m
degrees of freedom (here, the threshold T will be set to zero),

    g(x) = Σ_{i=0..m-1} wi φi(x)        (1.4.1)

where

    φi(x) = (x1)^(xi1) (x2)^(xi2) ··· (xn)^(xin),   xij being the jth component of the point xi        (1.4.2)

For example, if n = 4, x = [x1 x2 x3 x4]^T, and xi = [1 1 0 1]^T, Equation (1.4.2) gives φi(x) = x1 x2 x4. Note that by convention 0^0 = 1. Next, we rearrange the m points (vertices) of a given switching function as x^(0), x^(1), ..., x^(m-1), so that the number of 1's in x^(i) is larger than or equal to that in x^(j) if i >
j. To find the separating surface, g(x) = 0, that separates the m points into the two designated classes (0
and 1), the function g(x) must satisfy:

    w^T yi > 0 if x^(i) belongs to class 1, and w^T yi < 0 if x^(i) belongs to class 0,   for i = 0, 1, ..., m - 1        (1.4.3)

where

    yi = [φ0(x^(i)) φ1(x^(i)) ... φ_{m-1}(x^(i))]^T        (1.4.4)

and w = [w0 w1 w2 ... w_{m-1}]^T is the PTG weight vector. If the vectors yi are normalized by multiplying those of patterns of class 0 by -1, then Equation (1.4.3) may be rewritten in the following compact form

    Aw = b > 0        (1.4.5)

where b is an arbitrary positive margin vector. Here, A = TB, where B is defined as the m × m matrix

    B = [y0 y1 ... y_{m-1}]^T        (1.4.6)

and T is an m × m diagonal matrix given by

    T = diag(t0, t1, ..., t_{m-1}),   ti = +1 if x^(i) belongs to class 1 and ti = -1 if x^(i) belongs to class 0        (1.4.7)

The A matrix is a square matrix and its singularity depends on the B matrix. Using Equations (1.4.4)
and (1.4.6), the ijth component of B is given by

    Bij = φj(x^(i)) = Π_{k=1..n} (xk^(i))^(xk^(j))        (1.4.8)

Since we have assumed that 0^0 = 1, Equation (1.4.8) gives Bii = 1, for i = 0, 1, ..., m - 1. Now, since x^(i) has at least as many 1's as x^(j) when i > j, and the two patterns differ, we have Bij = 0 for i < j. Hence, B is a lower triangular and nonsingular
matrix. Accordingly, A is a triangular nonsingular matrix, thus, the solution vector w exists and can be
easily calculated by forward substitution (e.g., see Gerald, 1978) in Equation (1.4.5). Note that some of
the components of the solution vector w may be forced to zero with proper selection of the margin
vector b. This completes the proof of the Theorem.

Example 1.4.1: Consider the partially specified non-threshold Boolean function in Table 1.4.1. The
patterns in Table 1.4.1 are shown sorted in an ascending order in terms of the number of 1's in each
pattern.

    Ordered pattern    x1  x2  x3    Class
    x^(0)              0   0   0     1
    x^(1)              1   0   0     0
    x^(2)              0   1   0     0
    x^(3)              1   1   0     1
    x^(4)              0   1   1     1
    x^(5)              1   1   1     1
Table 1.4.1. A partially specified Boolean function for Example 1.4.1.

From Equation (1.4.8), the B matrix is computed as

    B = [ 1 0 0 0 0 0
          1 1 0 0 0 0
          1 0 1 0 0 0
          1 1 1 1 0 0
          1 0 1 0 1 0
          1 1 1 1 1 1 ]

and from Equation (1.4.7), the T matrix is given by

    T = diag(1, -1, -1, 1, 1, 1)

Thus, A = TB is given as

    A = [  1  0  0  0  0  0
          -1 -1  0  0  0  0
          -1  0 -1  0  0  0
           1  1  1  1  0  0
           1  0  1  0  1  0
           1  1  1  1  1  1 ]

Now, using forward substitution in Equation (1.4.5) with b = [1 1 1 1 1 1]^T, we arrive at the solution:

    w = [1 -2 -2 4 2 -2]^T

Substituting this w in Equation (1.4.1) allows us to write the equation for the separating surface (φ-surface) realized by the above PTG as

    g(x) = 1 - 2x1 - 2x2 + 4x1x2 + 2x2x3 - 2x1x2x3 = 0
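
The synthesis procedure in the proof of Theorem 1.4.1 is mechanical and easily scripted. The sketch below (the helper names are ours) rebuilds B, T, and A for the function of Table 1.4.1 and solves Aw = b by forward substitution, reproducing the weight vector obtained above.

```python
# Kashyap-style minimal PTG synthesis for the partially specified function of Table 1.4.1.
import numpy as np

# Points ordered by increasing number of 1's, with their desired classes.
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
cls = np.array([1, 0, 0, 1, 1, 1])
m = len(X)

def phi(p, x):
    """phi_p(x): product of x_j over the positions where the p-th ordered point has a 1."""
    return np.prod(np.where(X[p] == 1, x, 1))

B = np.array([[phi(j, X[i]) for j in range(m)] for i in range(m)])  # lower triangular
T = np.diag(np.where(cls == 1, 1, -1))
A = T @ B
b = np.ones(m)

# Forward substitution on the lower-triangular system A w = b.
w = np.zeros(m)
for i in range(m):
    w[i] = (b[i] - A[i, :i] @ w[:i]) / A[i, i]
print("w =", w)   # prints [ 1. -2. -2.  4.  2. -2.], matching the phi-surface above

# Check the realized dichotomy: the sign of g(x) agrees with the desired classes.
g = B @ w
print("classes reproduced:", np.all((g > 0) == (cls == 1)))
```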

In general, the margin vector may be chosen in such a way that some of the components of the weight
vector w are forced to zero, thus resulting in a simpler realization of the PTG. Experimental evidence

(Kashyap, 1966) indicates that the number of nonzero components of the vector w is typically only a fraction of m. Note also that for m = 2^n, Theorem 1.4.1 is equivalent to Theorem 1.2.1. To prove this,

we note that according to Theorem 1.4.1, a PTG with a number of degrees of freedom d ≥ m is sufficient for guaranteed realizability of m arbitrary points in {0,1}^n. Now, for r = n, d takes on its largest possible value of 2^n, which is the largest possible value for m. Therefore, any Boolean function of n variables is realizable by a single PTG(r ≤ n) (this proves Theorem 1.2.1).

1.5 Ambiguity and Generalization

Consider the training set {x1, x2, ..., xm} and a class of decision surfaces [e.g., φ-surfaces associated with a PTG(r)] which separate this set. The classification of a new pattern y is ambiguous, relative to the given class of decision surfaces and with respect to the training set, if there exist two decision surfaces that both correctly classify the training set but yield different classifications of the new pattern. In
Figure 1.5.1, points y1 and y3 are unambiguous, but point y2 is ambiguous for the class of linear
decision surfaces. In the context of a single threshold gate, it can be shown (Cover, 1965) that the
number of training patterns must exceed the statistical capacity of a threshold gate before ambiguity is
eliminated.

Generalization is the ability of a network (here, a PTG) to correctly classify new patterns. Consider the
problem of generalizing from a training set with respect to a given admissible family of decision
surfaces (e.g., the family of surfaces that can be implemented by a PTG). During the training phase, a
decision (separating) surface from the admissible class is synthesized which correctly assigns members
of the training set to the desired categories. The new pattern will be assigned to the category lying on
the same side of the decision surface. Clearly, for some dichotomies of the training set, the assignment
of category will not be unique. However, it is generally known that if a large number of training
patterns are used, the decision surface will be sufficiently constrained to yield a unique response to a
new pattern. Next, we show that the number of training patterns must exceed the statistical capacity of
the PTG before unique response becomes probable.
Figure 1.5.1. Ambiguous generalization. The points y1 and y3 are uniquely classified regardless of
which of the two linear decision surfaces shown is used. The point y2 has an ambiguous classification.

Theorem 1.5.1 (Cover, 1965): Let X ∪ {y} = {x1, x2, ..., xm, y} be in φ-general position, with φ-space R^(d-1); then y is ambiguous with respect to C(m, d - 2) dichotomies of X relative to the class of all φ-surfaces.

Proof of Theorem 1.5.1:

The point y is ambiguous with respect to a dichotomy {X+, X-} of X = {x1, x2, ..., xm} if and only if there exists a φ-surface containing y which separates {X+, X-}. This is because, when X is in φ-general position, any φ-surface through y that realizes the dichotomy {X+, X-} can be shifted infinitesimally to allow arbitrary classification of y without affecting the separation of {X+, X-}. Equivalently, since φ-general position of X ∪ {y} implies that the set of points Z ∪ {φ(y)} = {φ(x1), φ(x2), ..., φ(xm), φ(y)} is in general position in R^(d-1), the point φ(y) is ambiguous with respect to the linear dichotomy {Z+, Z-} (here, each linear dichotomy of Z, {Z+, Z-}, corresponds to a unique φ-separable dichotomy {X+, X-} in the input space). Hence, the point φ(y) is ambiguous with respect to D linear dichotomies {Z+, Z-} which can be separated by a (d - 2)-dimensional hyperplane constrained to pass through φ(y). Constraining the hyperplane to pass through a point effectively reduces the dimension of the space by 1. Thus, by Theorem 1.3.2, D = C(m, d - 2). Now, by noting the one-to-one correspondence between the linearly separable dichotomies in φ-space and the φ-separable dichotomies in the input space, we establish that y is ambiguous with respect to C(m, d - 2) φ-separable dichotomies of X.

Now, if each of the φ-separable dichotomies of X has equal probability, then the probability, A(m, d), that y is ambiguous with respect to a random φ-separable dichotomy of X is

A(m, d) = C(m, d − 2) / C(m, d − 1)        (1.5.1)
Example 1.5.1: Let us apply the above theorem to Figure 1.5.1, where m = 10 and d = 3 (including the bias input). A new point y is ambiguous with respect to C(10, 1) = 20 dichotomies of the m points relative to the class of all lines in the plane. Now, C(10, 2) = 92 dichotomies of the m points are separable by the class of all lines in the plane. Thus, a new point y is ambiguous with respect to a random, linearly separable dichotomy of the m points with probability A(10, 3) = 20/92 ≈ 0.22.
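The numbers in this example are easily checked numerically. The following Python sketch evaluates a counting function with the convention C(m, k) = 2 Σ binom(m − 1, i), i = 0, ..., k, which reproduces the values C(10, 1) = 20 and C(10, 2) = 92 used above, and then forms the ambiguity probability of Equation (1.5.1); the code is an illustrative check, not material from the text.

# Numerical check of Example 1.5.1 (illustrative sketch).
from math import comb

def C(m, k):
    # counting function convention consistent with the values in the example
    return 2 * sum(comb(m - 1, i) for i in range(k + 1))

m, d = 10, 3                       # ten points, d = 3 degrees of freedom (bias included)
ambiguous = C(m, d - 2)            # dichotomies w.r.t. which a new point is ambiguous
separable = C(m, d - 1)            # dichotomies separable by lines in the plane
print(ambiguous, separable)        # -> 20 92
print("A(m, d) =", ambiguous / separable)   # -> about 0.217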

The behavior of A(m, d) for large d is given by (Cover, 1965)

A*(λ) = lim_{d→∞} A(λd, d) = 1 for λ ≤ 2,  and  A*(λ) = 1/(λ − 1) for λ > 2        (1.5.2)

where λ = m/d. The function A*(λ) is plotted in Figure 1.5.2. It is interesting to note that, according to Equation (1.5.2), ambiguous response is reduced only when λ > 2; i.e., when m is larger than the statistical capacity of the PTG (or, more generally, of the separating surface used).

Figure 1.5.2. Asymptotic probability of ambiguous response.

To eliminate ambiguity, we need λ >> 2 or m >> 2d; that is, a large number of training samples is required. If we define ε, 0 < ε < 1, as the probability of generalization error, then we can determine the number of samples required to achieve a desired ε. Assume that we choose m points (patterns) from a given distribution such that these points are in general position and that they are classified independently with equal probability into one of two categories. Then, the probability of generalization error on a new pattern similarly chosen, conditioned on the separability of the entire set, is equal to A(m, d). Therefore, as d → ∞, one can employ Equation (1.5.2) and show that m must satisfy

m ≥ d (1 + 1/ε)        (1.5.3)

in order to bound the probability of generalization error below ε. Thus, we find that the probability of generalization error for a single threshold gate with d degrees of freedom approaches zero as fast as d/m in the limit of m >> d.


1.6 Extreme Points

For any linear dichotomy {X+, X−} of a set of points, there exists a minimal sufficient subset of extreme points such that any hyperplane correctly separating this subset must separate the entire set correctly. From this definition it follows that a point is an extreme point of the linear dichotomy {X+, X−} if and only if it is ambiguous with respect to {X+, X−}. Thus, for a set of m points in general position in R^d, each of the m points is ambiguous with respect to precisely C(m − 1, d − 1) linear dichotomies of the remaining m − 1 points. Here, each of these C(m − 1, d − 1) dichotomies is the restriction of two linearly separable dichotomies of the original m points; the two dichotomies differ only in the classification of the remaining point. Therefore, each of the m points is an extreme point with respect to 2C(m − 1, d − 1) dichotomies. Since there are a total of C(m, d) linearly separable dichotomies, the probability, Pe, that a point is an extreme point with respect to a randomly generated dichotomy is (assuming equiprobable dichotomies)

Pe = 2 C(m − 1, d − 1) / C(m, d)        (1.6.1)

Then, the expected number of extreme points in a set of m points in general position in R^d is equal to the sum of the m probabilities that each point is an extreme point. Since these probabilities are equal, the expected number of extreme points can be written as

E[number of extreme points] = m Pe = 2m C(m − 1, d − 1) / C(m, d)        (1.6.2)

Figure 1.6.1 illustrates the effect of the size of the training set m on the normalized expected number of extreme points (the expected number divided by d).

Figure 1.6.1. Normalized expected number of extreme points for d = 5, 20, and d → ∞, as a function of λ = m/d.

For large d, with λ = m/d, the normalized expected number of extreme points is given by (Cover, 1965)

lim_{d→∞} E[number of extreme points]/d = λ for λ ≤ 2,  and  = 2 for λ > 2        (1.6.3)
which agrees with the simulation results in Figure 1.6.1 for d → ∞. The above limit on the expected
number of extreme points implies, roughly, that for a random separable set of m patterns, the average
number of patterns (information) which need to be stored for a complete characterization of the whole
set is only twice the number of degrees of freedom of the class of separating surfaces being used. The
implication to pattern recognition is that the essential information in an infinite training set can be
expected to be loaded into a network of finite capacity (e.g., a PTG of finite order or a finite network of
simple gates).
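Equation (1.6.2) is easy to evaluate numerically. The following Python sketch computes the normalized expected number of extreme points for d = 5 and d = 20 over a range of λ = m/d, reproducing the qualitative behavior of Figure 1.6.1: the curve grows roughly like λ for small λ and saturates near 2 for λ much larger than 2. The counting-function convention is the one used in Example 1.5.1; the code itself is illustrative and not from the text.

# Normalized expected number of extreme points, Equation (1.6.2):
#   E[#extreme points] = 2 m C(m-1, d-1) / C(m, d)
from math import comb

def C(m, k):
    return 2 * sum(comb(m - 1, i) for i in range(k + 1))

def expected_extreme_over_d(m, d):
    return 2 * m * C(m - 1, d - 1) / (C(m, d) * d)

for d in (5, 20):
    for lam in (1, 2, 3, 5, 10):
        m = lam * d
        print(f"d={d:2d}  lambda={lam:2d}  E/d = {expected_extreme_over_d(m, d):.3f}")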

1.7 Summary

This chapter introduces the LTG as the basic computational unit in a binary state neural network, and
analyzes its computational capabilities in realizing binary mappings. The LTG is then extended to the
more powerful PTG. A single LTG can only realize threshold (linearly separable) functions. On the
other hand, a PTG with order r ≤ n is capable of realizing any Boolean function of n inputs. The
impressive computational power of a PTG comes at the expense of increased implementation
complexity.

The notion of a separating surface is introduced and it is used to describe the operation of a threshold
gate as a two-class classifier. It is found that a single LTG is characterized by a linear separating
surface (hyperplane). On the other hand, a single PTG is capable of realizing a highly-flexible
nonlinear (polynomial) separating surface. The flexibility of a PTG's separating surface is due to a
dimensionality-expanding polynomial mapping (the φ-mapping) which is realized by a preprocessing layer
intrinsic to the PTG.

A fundamental theorem, known as the Function Counting Theorem, is proved and is used to derive the
following important properties of threshold gates. First, the statistical capacity of a single LTG,
assuming data in general position in R^n, is equal to twice the number of its degrees of freedom (weights). This result is also true for a PTG. Second, a threshold gate with d degrees of freedom trained to correctly classify m patterns will, on the average, respond ambiguously to a new input pattern as long as m < 2d. In this context, it is found that the probability of generalization error approaches zero asymptotically as d/m. Finally, in the limit of large d, we find that the average number of patterns needed for complete characterization of a given training set is only twice the number of degrees of freedom d of a threshold gate, assuming the separability of this training set by the threshold gate.

Problems:

1.1.1 Verify that the patterns in Figure 1.1.5 are admissible threshold patterns.

1.1.2 Derive the formulas given in Table 1.1.2 for degrees of freedom for real and binary input QTG's.

1.1.3 Prove that a PTG(r) with n binary inputs has degrees of freedom, not including
threshold.

1.1.4 Derive the closed form expression for the number of degrees of freedom for a PTG (r) given in
Equation (1.1.8).
1.2.1 Show that . Also, show that .

1.2.2 a. Prove algebraically that , for 1 ≤ i ≤ n.

b. Prove by induction, using the recursion relation in part (a), the Binomial Theorem:

where and are real numbers and n is an integer.

c. Use the Binomial Theorem to prove that .

1.2.3 Verify the inequality in Equation (1.2.5) by showing that it holds for n ≥ 2.

1.2.4 Plot the function

for a = 2 and 3. Now, let x3 = x1x2. Plot g(x1, x2, x3) = 0. Show that the four patterns of the XNOR
function of Example 1.2.1 are properly separated by g.

* 1.3.1 Prove that , and that . Also,

show that , if m = 2(n + 1).

1.3.2 Assume n < m ≤ d. Is it true that m points in general position in R^n are mapped by the φ-mapping of a PTG(r) into points in general position in R^{d−1} space (i.e., φ-general position)? Why? Can this PTG map m arbitrary points in R^n into m points in φ-general position? Explain.

* 1.3.3 Given m arbitrary points in R^n space, show that for m ≥ 3n + 2 and n ≥ 2, the number of dichotomies of the m points which are realizable with a single LTG is bounded from above by . (Hint: use Equation (1.3.7) adapted for the LTG case, with d − 1 = n.) Note that this bound reduces to for m = 2n, which represents a tighter upper bound than that of Equation (1.2.5). Use this result to show that

1.3.4 Consider a mapping f : R → R, defined as the set of input/output pairs {xi, yi}, i = 1, 2, ..., m. Assume that xi ≠ xj for all i ≠ j. Show that this mapping can be exactly realized by a polynomial of order r = m − 1 (this is referred to as "strict" interpolation). Hint: The determinant

is the Vandermonde determinant, which is nonzero if zi ≠ zj for all i ≠ j; i, j = 1, 2, ..., m.

* 1.3.5 Consider the mapping from R1 to {0, 1} defined by the input/output pairs (0, 0), (1, 1), (2, 0),
(3, 0), and (4, 1). Use the result of Problem 1.3.4 to synthesize a polynomial g(x) of minimal order
which realizes the above mapping. Plot g(x) to verify your solution. Next, assume that a PTG(r) is used
to realize the same mapping. Is it possible for the order r of this PTG to be smaller than that of the
polynomial g(x)? If the answer is yes, then synthesize the appropriate weights for this PTG (assume a
zero threshold) and plot the weighted-sum (i.e., the PTG output without thresholding) versus x.

1.3.6 Derive Equation (1.3.4) from the recursion relation .

1.3.7 Let y(x) = a + bx, where a, b ∈ R. Find the parameters a and b which minimize:

a.

b.

1.3.8 Find the polynomial y(x) = ax + bx^3 which approximates the function g(x) = sin x over [0, ] by minimizing the integral of the squared error between g(x) and y(x). Compare graphically the functions g(x), its approximation y(x), and its power series approximation g(x) ≈ x.


1.4.1 Find the solution vector w in Example 1.4.1 if b = [1 1 1 1 1 3]T . Use the K-map technique to
show that the Boolean function in Table 1.4.1 is not a threshold function. Does a b > 0 exist which
leads to a solution vector w with four or fewer nonzero components?

1.4.2 Use the method of Section 1.4 to synthesize a PTG for realizing the Boolean function
. Use a margin vector b which results in the minimal number of non-zero PTG
weights.

* 1.5.1 Derive Equation (1.5.2). See Cover (1965) for hints.

† 1.5.2 Plot the probability of ambiguous response, A(m, d), given in Equation (1.5.1), versus λ = m/d, for d = 2, 10, and 20.

1.5.3 Derive Equation (1.5.3).

* 1.6.1 Derive Equation (1.6.3), starting from Equation (1.6.2).


2. COMPUTATIONAL CAPABILITIES OF
ARTIFICIAL NEURAL NETWORKS
2.0 Introduction

In the previous chapter, the computational capabilities of single LTG's and PTG's were investigated. In
this chapter, networks of LTG's are considered and their mapping capabilities are investigated. The
function approximation capabilities of networks of units (artificial neurons) with continuous nonlinear
activation functions are also investigated. In particular, some important theoretical results on the
approximation of arbitrary multivariate continuous functions by feedforward multilayer neural
networks are presented. This chapter concludes with a brief section on neural network computational
complexity and the efficiency of neural network hardware implementation. In the remainder of this
book, the terms artificial neural network, neural network, network, and net will be used
interchangeably, unless noted otherwise.

Before proceeding any further, note that the n-input PTG(r) of Chapter One can be considered as a
form of a neural network with a "fixed" preprocessing (hidden) layer feeding into a single LTG in its
output layer, as was shown in Figure 1.3.4. Furthermore, Theorems 1.3.1 (extended to multivariate
functions) and 1.2.1 establish the "universal" realization capability of this architecture for continuous

functions of the form f : R^n → R (assuming that the output unit has a linear activation function) and for Boolean functions of the form f : {0, 1}^n → {0, 1}, respectively. Here, universality means that the approximation of an arbitrary continuous function can be made to any degree of accuracy. Note that for continuous functions, the order r of the PTG may become very large. On the other hand, for Boolean functions, universality means that the realization is exact. Here, r ≤ n is sufficient. The following sections consider other more interesting neural net architectures and present
important results on their computational capabilities.

2.1 Some Preliminary Results on Neural Network Mapping Capabilities

In this section, some basic LTG network architectures are defined and bounds on the number of
arbitrary functions which they can realize are derived. The realization of both Boolean and multivariate
functions of the form f : R^n → {0, 1} is considered.

2.1.1 Network Realization of Boolean Functions

A well known result from switching theory (e.g., see Kohavi, 1978) is that any switching function (Boolean function) of n variables can be realized using a two layer network with at most 2^{n−1} AND gates in the first (hidden) layer and a single OR gate in the second (output) layer, assuming that the inputs and their complements are available as inputs. This network is known as the AND-OR network and is shown in Figure 2.1.1. The parity function is a Boolean function which requires the largest network, i.e., the network with 2^{n−1} AND gates. For the case n = 4, the K-map of the parity function is shown in Figure 2.1.2.
Figure 2.1.1. AND-OR network structure.

Figure 2.1.2. K-map for the 4-input parity function.

It was shown in Chapter One that a single LTG is a more powerful logic gate than a single AND (or
OR) gate. Thus, one may replace the hidden layer AND gates in Figure 2.1.1 with LTG's and still retain
the universal logic property of the AND-OR network [The universality of a properly interconnected
network of simple threshold gates was first noted in a classic paper by McCulloch and Pitts (1943)].
The resulting net with the LTG's does not require the complements of the input variables as inputs, unlike the AND-OR net. This LTG net will be referred to as a threshold-OR net. Given the same switching function, the threshold-OR net may require a smaller number of gates compared to an AND-OR net. The parity function, though, is an example where the required number of AND gates is equal to the number of threshold gates in both nets; i.e., 2^{n−1} gates. Note that fewer hidden LTG's than AND gates are needed if the LTG net employs an LTG for the output unit. However, it can be shown (see Section 2.2.1) that a large number of hidden LTG's would still be necessary for realizing arbitrary Boolean functions in the limit of large n. Another interesting result, given in Section 2.2.2, establishes the number of LTG's necessary in a two hidden layer net for realizing arbitrary Boolean functions, together with a corresponding sufficient number.

The synthesis of threshold-OR and other LTG nets for the realization of arbitrary switching functions
was extensively studied in the sixties and early seventies. A recommended brief introduction to this
subject appears in the book by Kohavi (1978). In fact, several books and Ph.D. dissertations have been
written on threshold logic networks and their synthesis (refer to the introduction of Chapter One for
references). Here, we give only a simple illustrative example employing the K-map technique (the K-
map was introduced in Section 1.1.1) to realize the Boolean function f(x) in Figure 2.1.3(a). Figure
2.1.3(b) shows one possible decomposition of f(x) into a minimal number of threshold patterns (single
LTG realizable patterns) f1(x) and f2(x). The corresponding architecture for the threshold-OR net
realization is depicted in Figure 2.1.3(c). This K-map-based synthesis technique may be extended to
multiple output nets (see Problem 2.1.2), but is only practical for small n.

(a) (b) (c)

Figure 2.1.3. Threshold-OR realization of a 3-input switching function f(x): (a) K-map for f(x); (b) K-
map decomposition of f(x) into two threshold functions; and (c) Threshold-OR realization of f(x).
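To make the threshold-OR structure concrete, the following Python sketch realizes the two-input XOR function with two hidden LTG's feeding an OR (itself an LTG) output unit; the weights are hand-picked for this small example and are unrelated to the particular function of Figure 2.1.3.

# Illustrative threshold-OR net (weights chosen by hand):
# two hidden LTG's realize the threshold functions x1 AND (NOT x2) and
# (NOT x1) AND x2; an OR gate (also an LTG) combines them to give XOR.

def ltg(x, w, theta):
    # Linear threshold gate: output 1 iff w . x >= theta
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def threshold_or_xor(x1, x2):
    h1 = ltg((x1, x2), (1, -1), 1)    # fires only for (1, 0)
    h2 = ltg((x1, x2), (-1, 1), 1)    # fires only for (0, 1)
    return ltg((h1, h2), (1, 1), 1)   # OR of the two hidden outputs

for x1 in (0, 1):
    for x2 in (0, 1):
        assert threshold_or_xor(x1, x2) == (x1 ^ x2)
print("Threshold-OR net realizes XOR")

Note that, as stated above, the hidden LTG's do not require the complemented inputs.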

The following theorem establishes an upper bound on the size of a feedforward net of LTG's for
realizing arbitrary, partially specified switching functions.

Theorem 2.1.1: Consider a partially specified switching function f(x) defined on a set of m arbitrary points in {0, 1}^n, with m ≤ 2^n. Then, a two layer net of LTG's employing m LTG's in its hidden layer, feeding into a single output LTG, is sufficient to realize f(x).

Proof of Theorem 2.1.1:

This theorem can be viewed as a corollary of Theorem 1.4.1 of Chapter One (see Problem 2.1.3).

Theorem 2.1.1 can be easily extended to the case of multiple output switching functions of the form f : {0, 1}^n → {0, 1}^L. Here, the worst case scenario is to duplicate the above network L times, where L is the number of output functions. This leads to a sufficient LTG net realization having mL LTG's in its hidden layer and L LTG's in its output layer.
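One simple way to see why m hidden LTG's suffice for binary-valued inputs is to dedicate one hidden LTG to each training point, so that it fires only on that point, and to let the output LTG simply OR the hidden units whose targets are 1. The following Python sketch implements this idea; it is an illustrative construction (with an assumed example function) and not necessarily the one implied by Theorem 1.4.1.

# Illustrative two layer LTG construction for a partially specified Boolean function.

def ltg(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def build_two_layer_net(points, targets):
    # Hidden unit j fires exactly on points[j]: weights +1/-1, threshold = number of ones.
    hidden = [([1 if b else -1 for b in p], sum(p)) for p in points]
    def net(x):
        h = [ltg(x, w, th) for (w, th) in hidden]
        out_w = [1 if t == 1 else 0 for t in targets]   # output LTG ORs the "positive" units
        return ltg(h, out_w, 1)
    return net

# A partially specified 3-input function given on m = 4 of the 8 points (assumed example).
points  = [(0, 0, 0), (1, 0, 1), (1, 1, 0), (0, 1, 1)]
targets = [0, 1, 1, 0]
net = build_two_layer_net(points, targets)
print([net(p) for p in points])   # -> [0, 1, 1, 0]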

2.1.2 Bounds on the Number of Functions Realizable by a Feedforward Network of LTG's

Consider the following question: How many functions f : R^n → {0, 1} defined on m arbitrary points in R^n are realizable by a layered net of k LTG's? In the following, this question is answered for three different network architectures: first, a single hidden layer feedforward network; second, an arbitrarily interconnected net with no feedback; and finally, a two hidden layer feedforward network. All three nets are assumed to be fully interconnected.
Consider a feedforward net with k LTG's in its hidden layer, all feeding to a single LTG in the output
layer as shown in Figure 2.1.4. We can show that for m arbitrary points in R^n, with m ≥ 3n + 2 and n ≥ 2,
this net can realize at most F1(n, m, k) functions, with F1 given by (Winder, 1963)

(2.1.1)

Figure 2.1.4. A two layer feedforward net with k hidden LTG's feeding into a single output LTG.

Proof:

Each of the k LTG's can realize at most (Winder, 1963; Also, recall Problem 1.3.3)

functions (dichotomies), and the final LTG can realize at most

functions. Therefore, the network can realize no more than

or

functions.

Another interesting type of network is the generally fully interconnected net of k LTG's with no
feedback shown in Figure 2.1.5. This type of network can realize at most G(n, m, k) functions with G
given by
(2.1.2)

Figure 2.1.5. Generally fully interconnected net of k LTG's with no feedback.

Proof:

Since there is no feedback, the gates (LTG's) can be ordered as first gate, second gate, ..., kth gate, so
that the jth gate only receives inputs from the n independent variables and the j − 1 gates labeled 1,
2, ..., j − 1 (see Figure 2.1.5). The jth gate can then realize at most

functions. The total network can realize at most

functions, which is less than

This last result can be simplified to


Similarly, it can be shown that the feedforward net shown in Figure 2.1.6 with two hidden layers of k/2 LTG's each, and with a single output LTG, is capable of realizing at most

(2.1.3)

functions.

Figure 2.1.6. A three layer LTG net having k/2 units in each of the two hidden layers and a single output LTG.

Similar bounds can be derived for points in general position by simply replacing the bound on
the number of realizable functions by the tighter bound for input units (units receiving direct
connections from all components of the input vector), since the statistical capacity of a single d-input
LTG is 2(d + 1) points, for large d (refer to Section 1.3.3).

2.2 Necessary Bounds on the Size of LTG Networks

In this section, we derive necessary lower bounds on the number of gates, k, in a network of LTG's for
realizing any function f : R^n → {0, 1} of m points arbitrarily chosen in R^n. For the case of functions
defined on m points in general position, similar bounds are also derived. Again, we consider the three
network architectures of Section 2.1.2.

2.2.1 Two Layer Feedforward Networks

Case 1: m arbitrary points

Consider the function f : R^n → {0, 1} defined on m arbitrary points in R^n. Here, we show that a two layer feedforward net whose hidden layer contains fewer LTG's than the bound derived below in Equation (2.2.3) is not sufficient to realize such an arbitrary function (in the limit of large n). Recalling Equation (2.1.1) and requiring that F1(n, m, k) be larger than or equal to the total number of possible binary functions of m points (namely, 2^m), we get (assuming m ≥ 3n + 2 and n ≥ 2)

(2.2.1)

We may solve for k as a function of m and n. Taking the logarithm (base 2) of Equation (2.2.1) and
rearranging terms, we get

(2.2.2)

By employing Stirling's approximation , we can write

where a large n is assumed. Similarly, we employ the approximation

. Now, Equation (2.2.2) reduces to

(2.2.3)

with and . Equation (2.2.3) gives a necessary condition on the size of the net for
realizing any function of m arbitrary n-dimensional points. It is interesting to note the usefulness of this

bound by comparing it to the sufficient net constructed by Baum (1988), which requires
hidden LTG's.

As a special case, we may consider an arbitrary completely specified (m = 2^n) Boolean function f : {0, 1}^n → {0, 1}. Here, Equation (2.2.3) with m = 2^n gives


(2.2.4)

which means that as n → ∞, an infinitely large net is required. A limiting case of Equation (2.2.3) is for m >> n^2, which leads to the bound

(2.2.5)

Case 2: m points in general position

Recalling Equation (1.3.5) and the discussion in Section 1.3.3 on the statistical capacity of a single
LTG, one can determine an upper bound, FGP, on the number of possible functions (dichotomies) on m
points in general position in Rn, for a two layer feedforward net with k hidden units as

(2.2.6)

for m ≥ 2n and n → ∞. The first term on the right-hand side of Equation (2.2.6) represents the total number of functions realizable by the k hidden layer LTG's, where each LTG is capable of realizing 2^{2(n+1)} functions. On the other hand, the second term in Equation (2.2.6) represents an upper bound on the
number of functions realized by the output LTG. It assumes that the hidden layer transforms the m
points in general position in Rn to points which are not necessarily in general position in the hidden
space, Rk.

For the net to be able to realize any one of the above functions, its hidden layer size k must satisfy

(2.2.7)

or

(2.2.8)

This bound is tight for m close to 2n, where it gives k = 1, which is the optimal net (recall that any dichotomy of m points in general position in R^n has a probability approaching one of being realized by a single LTG as long as m ≤ 2n, for large n). Note that when there is only a single LTG in the hidden layer, the output LTG becomes redundant.
Equation (2.2.8) agrees with the early experimental results reported by Widrow and Angell (1962).

Also, it is interesting to note that in the limit of large m, Equation (2.2.8) gives a lower bound on k which is equal to one half the number of hidden LTG's of the optimal net reported by Baum (1988). It is important to note that the bounds on k derived using the above approach are relatively tighter for the case of m points in general position than for the case of m points in arbitrary position. This is because we are using the actual statistical capacity of an LTG for the general position case, as opposed to the upper bound on the number of dichotomies used for the arbitrary position case. This observation
is also valid for the bounds derived in the remainder of this section.

2.2.2 Three Layer Feedforward Networks

Case 1: m arbitrary points

Consider a two hidden layer net having k/2 (k is even) LTG's in each of its hidden layers and a single LTG in its output layer. From Equation (2.1.3) the net is capable of realizing at most

arbitrary functions. Following earlier derivations, we can bound the necessary net size by

(2.2.9)

for m ≥ 3n and large n. Next, taking the logarithm of both sides of Equation (2.2.9) and employing Stirling's approximation gives

(2.2.10)

which can be solved for k as

(2.2.11)

Assuming that m >> n^2, Equation (2.2.11) can be approximated as

(2.2.12)
For the special case of arbitrary Boolean functions with m = 2^n, Equation (2.2.12) gives a lower bound on the number of hidden LTG's. A corresponding upper bound on the number of hidden LTG's was reported by Muroga (1959).

Case 2: m points in general position

Here, we start from the relation

(2.2.13)

which assumes that k is large and that the two hidden layer mappings preserve the general position property of the m points. Now, in the limit of large m and n, Equation (2.2.13) can be solved for k to give

(2.2.14)

2.2.3 Generally Interconnected Networks with no Feedback

It is left as an exercise for the reader to verify that the necessary size of an arbitrarily fully
interconnected network of LTG's (with no feedback) for realizing any function f : R^n → {0, 1} defined on
m arbitrary points is given by (see Problem 2.2.3)

(2.2.15)

and for points in general position

(2.2.16)

It is of interest to compare Equations (2.2.12) and (2.2.14) with Equations (2.2.15) and (2.2.16),
respectively. Note that for these two seemingly different network architectures, the number of
necessary LTG's for realizing arbitrary functions is of the same order. This agrees with the results of
Baum (1988), who showed that these bounds are of the same order for any layered feedforward net
with two or more hidden layers and the same number of units, irrespective of the number of layers
used. This suggests that if one were able to compute an arbitrary function using a two hidden layer net

with only the necessary number of units, there would not be much to gain, in terms of random or arbitrary function
realization capability, by using more than two hidden layers! Also, comparing Equations (2.2.3),
(2.2.12) and (2.2.15) [or Equations (2.2.8), (2.2.14), and (2.2.16)] shows that when the size of the
training set is much larger than the dimension of the input exemplars, then networks with two or more
hidden layers may require substantially fewer units than networks with a single hidden layer.

In practice, the actual points (patterns) that we want to discriminate between are not arbitrary or
random; rather, they are likely to have natural regularities and redundancies. This may make them
easier to realize with networks having substantially smaller size than these enumerational statistics
would indicate. For convenience, the results of this section are summarized in Table 2.2.1.
NETWORK ARCHITECTURE                             LOWER BOUNDS ON THE SIZE OF AN LTG NET
                                                 ARBITRARY POINTS       POINTS IN GENERAL POSITION

One hidden layer feedforward net
with k hidden units                              Equation (2.2.3)       Equation (2.2.8)

Two hidden layer feedforward net
with k/2 units in each layer                     Equation (2.2.12)      Equation (2.2.14)

Generally interconnected net
with k units (no feedback)                       Equation (2.2.15)      Equation (2.2.16)

Table 2.2.1. Lower bounds on the size of a net of LTG's for realizing any function of
m points, f : R^n → {0, 1}, in the limit of large n.

2.3 Approximation Capabilities of Feedforward Neural Networks for Continuous Functions

This section summarizes some fundamental results, in the form of theorems, on continuous function
approximation capabilities of feedforward nets. The main result is that a two layer feedforward net with
a sufficient number of hidden units, of the sigmoidal activation type, and a single linear output unit is
capable of approximating any continuous function f : R^n → R to any desired accuracy. Before formally
stating the above result, we consider some early observations on the implications of a classical theorem
on function approximation, Kolmogorov's theorem, which motivates the use of layered feedforward
nets as function approximators.

2.3.1 Kolmogorov's Theorem

It has been suggested (Hecht-Nielsen, 1987 and 1990; Lippmann, 1987; Sprecher, 1993) that Kolmogorov's theorem concerning the realization of arbitrary multivariate functions provides theoretical support for neural networks that implement such functions.

Theorem 2.3.1 (Kolmogorov, 1957): Any continuous real-valued function f(x1, x2, ..., xn) defined on [0, 1]^n, n ≥ 2, can be represented in the form

f(x1, x2, ..., xn) = Σ_{j=1}^{2n+1} gj ( Σ_{i=1}^{n} φij(xi) )        (2.3.1)

where the gj's are properly chosen continuous functions of one variable, and the φij's are continuous monotonically increasing functions independent of f.
The basic idea in Kolmogorov's theorem is captured in the network architecture of Figure 2.3.1, where
a universal transformation M maps R^n into several uni-dimensional transformations. The theorem states
that one can express a continuous multivariate function on a compact set in terms of sums and
compositions of a finite number of single variable functions.

Figure 2.3.1. Network representation of Kolmogorov's theorem.

Others, such as Girosi and Poggio (1989), have criticized this interpretation of Kolmogorov's theorem as irrelevant to neural networks by pointing out that the φij functions are highly non-smooth and the functions gj are not parameterized. On the other hand, Kůrková (1992) supported the relevance of this theorem to neural nets by arguing that non-smooth functions can be approximated as sums of infinite series of smooth functions; thus one should be able to approximately implement the φij and gj with parameterized networks. More recently, Lin and Unbehauen (1993) argued that an "approximate" implementation of the gj's does not, in general, deliver an approximate implementation of the original function f(x). As this debate continues, the importance of Kolmogorov's theorem may lie not in its direct application to proving the universality of neural nets as function approximators, but rather in the fact that it points to the feasibility of using parallel and layered network structures for multivariate function mappings.

2.3.2 Single Hidden Layer Neural Networks are Universal Approximators

Rigorous mathematical proofs for the universality of feedforward layered neural nets employing
continuous sigmoid type, as well as other more general, activation units were given, independently, by
Cybenko (1989), Hornik et al. (1989), and Funahashi (1989). Cybenko's proof is distinguished by being
mathematically concise and elegant [it is based on the Hahn-Banach theorem (Luenberger, 1969)]. The
following is the statement of Cybenko's theorem (the reader is referred to the original paper by
Cybenko (1989) for the proof).

Theorem 2.3.2 (Cybenko, 1989): Let ϕ be any continuous sigmoid-type function (e.g., ϕ(s) = 1/(1 + e^{−s})). Then, given any continuous real-valued function f on [0, 1]^n (or any other compact subset of R^n) and ε > 0, there exist vectors w1, w2, ..., wN, α, and θ, and a parameterized function G(·, w, α, θ): [0, 1]^n → R such that

| G(x, w, α, θ) − f(x) | < ε   for all x ∈ [0, 1]^n

where

G(x, w, α, θ) = Σ_{j=1}^{N} αj ϕ(wjT x + θj)        (2.3.2)

and wj ∈ R^n, αj, θj ∈ R, w = (w1, w2, ..., wN), α = (α1, α2, ..., αN), and θ = (θ1, θ2, ..., θN).
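The form of Equation (2.3.2) can be exercised directly. In the Python sketch below the hidden weights and thresholds are fixed at random and only the output coefficients α are fitted by least squares; this is merely one convenient way to exhibit the representation, not Cybenko's construction, and the target function, parameter ranges, and number of hidden units are arbitrary illustrative choices.

# Illustrative approximation of a continuous function on [0, 1] by
#   G(x) = sum_j alpha_j * sigmoid(w_j * x + theta_j)        [cf. Eq. (2.3.2)]
import numpy as np

rng = np.random.default_rng(0)
N = 50                                    # number of hidden sigmoid units
w = rng.uniform(-20.0, 20.0, size=N)      # hidden weights (assumed values)
theta = rng.uniform(-20.0, 20.0, size=N)  # hidden thresholds (assumed values)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def hidden(x):                            # x: (m,) -> (m, N) matrix of hidden outputs
    return sigmoid(np.outer(x, w) + theta)

f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x        # target continuous function (example)
x_train = np.linspace(0.0, 1.0, 200)
alpha, *_ = np.linalg.lstsq(hidden(x_train), f(x_train), rcond=None)

x_test = np.linspace(0.0, 1.0, 57)
err = np.max(np.abs(hidden(x_test) @ alpha - f(x_test)))
print(f"max |G(x) - f(x)| on test grid: {err:.4f}")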

Hornik et al. (1989) [employing the Stone-Weierstrass Theorem (Rudin, 1964)] and Funahashi (1989)
[using an integral formula presented by Irie and Miyake (1988)] independently proved similar theorems
stating that a one hidden layer feedforward neural network is capable of approximating uniformly any
continuous multivariate function, to any desired degree of accuracy. This implies that any failure of a
function mapping by a multilayer network must arise from inadequate choice of parameters (i.e., poor
choices for w1, w2, ..., wN, α, and θ) or an insufficient number of hidden nodes. Hornik et al. (1990)
proved another important result relating to the approximation capability of multilayer feedforward
neural nets employing sigmoidal hidden unit activations. They showed that these networks can
approximate not only an unknown function, but also approximate its derivative. In fact, Hornik et al.
(1990) also showed that these networks can approximate functions that are not differentiable in the
classical sense, but possess a generalized derivative as in the case of piecewise differentiable functions.

Using a theorem by Sun and Cheney (1992), Light (1992a) extended Cybenko's results to any
continuous function f on R^n and showed that signed integer weights and thresholds are sufficient for
accurate approximation. In another version of the theorem, Light shows that the sigmoid can be
replaced by any continuous function ϑ on R satisfying:

(2.3.3)

where n is odd and n ≥ 3. A similar result can be established for n even. Examples of activation functions
satisfying the above conditions are given by the family of functions

(2.3.4)

Note that the cosine term is the Chebyshev polynomial of degree n. Figure 2.3.2 shows two plots of this
activation function for n = 3 and n = 7, respectively.
(a) (b)

Figure 2.3.2. Activation function ϑn(·) for the case (a) n = 3, and (b) n = 7.

The universality of single hidden layer nets with units having non-sigmoid activation functions was
formally proved by Stinchcombe and White (1989). Baldi (1991) showed that a wide class of
continuous multivariate functions can be approximated by a weighted sum of bell-shaped functions
(multivariate Bernstein polynomials); i.e., a single hidden layer net with bell-shaped activations for its
hidden units and a single linear output unit is a possible approximator of functions f : R^n → R. Hornik
(1991) proved that a sufficient condition for universal approximation can be obtained by using
continuous, bounded, and nonconstant hidden unit activation functions. Recently, Leshno et al. (1993,
see also Hornik, 1993) have extended these results by showing that the above neural network with
locally bounded piecewise continuous activation functions for hidden units is a universal approximator
if and only if the network's activation function is not a polynomial. Ito (1991) showed that any function
belonging to the class of rapidly decreasing continuous functions in R^n (i.e., functions f(x) satisfying an appropriate decay condition for any kj ≥ 0) can be approximated arbitrarily well by a two layer


architecture with a finite number of LTGs in the hidden layer. Here, the requirement of rapid decrease
in f is not necessary and can be weakened.

2.3.3 Single Hidden Layer Neural Networks are Universal Classifiers

Theorem 2.3.2 may also be extended to classifier-type mappings (Cybenko, 1989) of the form

f(x) = j  iff  x ∈ Pj        (2.3.5)

where f : An → {1, 2, ..., k}, An is a compact (closed and bounded) subset of R^n, and P1, P2, ..., Pk partition An into k disjoint measurable subsets; i.e., P1 ∪ P2 ∪ ... ∪ Pk = An and Pi ∩ Pj is empty for i ≠ j.

Thus, a single hidden layer net with sigmoidal activation units and a single linear output unit is a
universal classifier. This result was confirmed empirically by Huang and Lippmann (1988) on several
examples including the ones shown in Figure 2.3.3 for n = 2 and k = 2 (i.e., two-class problems in a
two-dimensional space).
Figure 2.3.3. Ten complex decision regions formed by a neural net classifier with a single hidden layer.
(Adapted from W. Y. Huang and R. P. Lippmann, 1988, with permission of the American Institute of
Physics.)

We conclude this section by noting that in Equation (2.3.5), the class label is explicitly generated as the
output of a linear unit. This representation of class labels via quantized levels of a single linear output
unit, although useful for theoretical considerations, is not practical; it imposes unnecessary constraints
on the hidden layer, which in turn, leads to a large number of hidden units for the realization of
complex mappings. In practical implementations, a local encoding of classes is more commonly used
(Rumelhart et al., 1986a). This relaxes the constraints on the hidden layer mapping by adding several
output units, each of which is responsible for representing a unique class. Here, LTG's or sigmoid type
units may be used as output units.

2.4 Computational Effectiveness of Neural Networks

2.4.1 Algorithmic Complexity

A general way of looking at the efficiency of embedding a problem in a neural network comes from a
computational complexity point of view (Abu-Mostafa, 1986a and 1986b). Solving a problem on a
sequential computer requires a certain number of steps (time complexity), memory size (space
complexity), and length of algorithm (Kolmogorov complexity). In a neural net simulation the number
of computations is a measure of time complexity, the number of units is a measure of space
complexity, and the number of weights (degrees of freedom) where the algorithm is "stored" is a
measure of Kolmogorov complexity. In formulating neural network solutions for practical problems,
we seek to minimize, simultaneously, the resulting time, space, and Kolmogorov complexities of the
network. If a given problem is very demanding in terms of space complexity, then the required network
size is large and thus the number of weights is large, even if the Kolmogorov complexity of the
algorithm is very modest! This spells inefficiency: neural net solutions of problems with short
algorithms and high-space complexity are very inefficient. The same is true for problems where time
complexity is very demanding, while the other complexities may not be.
The above complexity discussion leads us to identify certain problems which best match the
computational characteristics of artificial neural networks. Problems which require a very long
algorithm if run on sequential machines make the most use of neural nets, since the capacity of the net
grows faster than the number of units. Such problems are called random problems (Abu-Mostafa,
1986b). Examples are pattern recognition problems in natural environments and AI problems requiring
huge data bases. These problems make the most use of the large capacity of neural nets. It is interesting
to note that humans are very good at such random problems but not at structured ones. The other
interesting property of random problems is that we do not have explicit algorithms for solving them. A
neural net can develop one by learning from examples, as will be shown in Chapters 5 and 6.

At this point, one may recall the results of Chapter One, based on Cover's concept of extreme
patterns/inequalities, which point to the effectiveness of threshold units in loading a large set of labeled
patterns (data/prototypes) by only learning on extreme patterns. These results and those presented here
show that neural networks can be very efficient classifiers.

2.4.2 Computation Energy

For any computational device, be it biologically-based, microprocessor-based, etc., the cost-per-


computation or computational energy of the device can be directly measured in terms of the energy
required (in units of Joules) to perform the computation. From this point of view, we may then compare
computational devices in terms of the total energy required to solve a given problem.

As technology evolved, it always moved in the direction of lower energy per unit computation in order
to allow for more computations per unit time in a practically sized computing machine (note the trend
from vacuum tubes, to transistors, to integrated circuits). A typical microprocessor chip can perform about 10 million operations per second and uses about 1 watt of power. In round numbers, it costs about 10^−7 joules to do one operation on such a chip. The ultimate silicon technology we can envision today will dissipate on the order of 10^−9 joules of energy for each operation at the single chip level.

The brain, on the other hand, has about 10^15 synapses. A nerve pulse arrives at each synapse on the average of 10 times per second. So, roughly, the brain accomplishes 10^16 complex operations per second. Since the power dissipation is a few watts, each operation costs only 10^−16 joules! The brain is
more efficient, by a factor of 10 million, than the best digital technology that we can hope to attain.

One reason for the inefficiency in computation energy is due to the way devices are used in a system.
In a typical silicon implementation, we switch about 10^4 transistors to do one operation. Using the
physics of the device (e.g., analog computing in a single transistor) can save us these four orders of
magnitude in computation energy. For example, analog addition is done for free at a node, and
nonlinear activations (sigmoids) can be realized using a single MOS transistor operating in its
subthreshold region. Similarly, multiplication may be performed with a small MOS transistor circuit.
Therefore, one can very efficiently realize analog artificial neurons with existing technology employing
device physics (Hopfield, 1990; Mead, 1991). Carver Mead (1991) wrote: "We pay a factor of 10^4 for
taking all of the beautiful physics that is built into these transistors, mashing it down into a one or a
zero, and then painfully building it back, with AND and OR gates to reinvent the multiply. We then
string together those multiplications and additions to get an exponential. But we neglected a basic fact:
the transistor does an exponential all by itself."

Based on the above, the type of computations required by neural networks may lend themselves nicely
to efficient implementation with current analog VLSI technology; there is a potential for effectively
and efficiently realizing very large analog neural nets on single chips. Analog optical implementations
of neural networks could also have a competitive advantage over digital implementations. Optics have
the unique advantage that beams of light can cross each other without affecting their information
contents. Thus, optical interconnections may be used in conjunction with electronic VLSI chips to
efficiently implement very large and richly interconnected neural networks. The richness of optical
device physics, such as holograms, analog spatial light modulators, optical filters, and Fourier lenses
suggests that optical implementation of analog neural nets could be very efficient. [For examples of
very large optical neural networks, the reader is referred to the works of Paek and Psaltis (1987), Abu-
Mostafa and Psaltis (1987), and Anderson and Erie (1987).]
2.5 Summary

Lower bounds on the size of multilayer feedforward neural networks for the realization of arbitrary
dichotomies of points are derived. It is found that networks with two or more hidden layers are
potentially more size-efficient than networks with a single hidden layer. However, the derived bounds
suggest that no matter what network architecture is used (assuming no feedback), the size of such

networks must always be of the order given in Table 2.2.1 or larger for the realization of arbitrary or random functions of m points in R^n, in the limit m >> n^2. Fortunately, though, the functions
encountered in practice are likely to have natural regularities and redundancies which make them easier
to realize with substantially smaller networks than these bounds would indicate.

Single hidden layer feedforward neural networks are universal approximators for arbitrary multivariate
continuous functions. These networks are also capable of implementing arbitrarily complex
dichotomies; thus, they are suitable as pattern classifiers. However, it is crucial that the parameters of
these networks be carefully chosen in order to exploit the full function approximation potential of these
networks. In the next chapter, we explore learning algorithms which may be used to adaptively
discover optimal values for these parameters.

Finally, we find that from an algorithmic complexity point of view, neural networks are best fit for
solving random problems such as pattern recognition problems in noisy environments. We also find
such networks appealing from a computation energy point of view when implemented in analog VLSI
and/or optical technologies that utilize the properties of device physics.

Problems:

2.1.1 Use the K-map-based threshold-OR synthesis technique illustrated in Figure 2.1.3 to identify all
possible decompositions of the Boolean function f(x1, x2, x3) = x1x3 + x2x3 + into the ORing of
two threshold functions (Hint: recall the admissible K-map threshold patterns of Figure 1.1.5).

2.1.2 Find a minimal threshold-OR network realization for the switching function

given by the K-maps in Figure P2.1.2.

Figure P2.1.2. A three input, two output switching function.


2.1.3 Employ Theorem 1.4.1 and the equivalence of the two nets in Figures 1.3.4 and 1.2.1, for

, to show that a two layer LTG net with m LTG's in the hidden layer, feeding into a single
output LTG is sufficient for realizing any Boolean function of m points in {0,1}n. Is the requirement of
full interconnectivity between the input vector and the hidden LTG layer necessary? Why?

*2.2.1 Estimate the maximum number of functions f : R^n → {0, 1}, defined on m points in general


position, that can be realized using the generally interconnected LTG network shown in Figure 2.1.5.

2.2.2 Derive the bound on k in Equation (2.2.14).

* 2.2.3 Derive the bound on k in Equation (2.2.15) [Hint: Use Equation (2.1.2)].

2.2.4 Consider an arbitrarily fully interconnected network of LTG's with no feedback. Show that this

network must have more than LTG's for realizing arbitrary, completely specified Boolean
functions of n-variables.

* 2.2.5 Derive the bound on k in Equation (2.2.16). Hint: Use the result of Problem 2.2.1.

2.3.1 Plot Equation (2.3.4) for n = 9.

* 2.3.2 Show that the activation in Equation (2.3.4) with n = 3 satisfies the three conditions in Equation
(2.3.3).
3. LEARNING RULES
3.0 Introduction

One of the most significant attributes of a neural network is its ability to learn by interacting with its
environment or with an information source. Learning in a neural network is normally accomplished
through an adaptive procedure, known as a learning rule or algorithm whereby the weights of the
network are incrementally adjusted so as to improve a predefined performance measure over time.

In the context of artificial neural networks, the process of learning is best viewed as an optimization
process. More precisely, the learning process can be viewed as "search" in a multi-dimensional
parameter (weight) space for a solution, which gradually optimizes a prespecified objective (criterion)
function. This view is adopted in this chapter, and it allows us to unify a wide range of existing
learning rules, which otherwise would have looked more like a diverse variety of learning procedures.

This chapter presents a number of basic learning rules for supervised, reinforced, and unsupervised
learning tasks. In supervised learning (also known as learning with a teacher or associative learning),
each input pattern/signal received from the environment is associated with a specific desired target
pattern. Usually, the weights are synthesized gradually, and at each step of the learning process they are
updated so that the error between the network's output and a corresponding desired target is reduced.
On the other hand, unsupervised learning involves the clustering of (or the detection of similarities
among) unlabeled patterns of a given training set. The idea here is to optimize (maximize or minimize)
some criterion or performance function defined in terms of the output activity of the units in the
network. Here, the weights and the outputs of the network are usually expected to converge to
representations which capture the statistical regularities of the input data. Reinforcement learning
involves updating the network's weights in response to an "evaluative" teacher signal; this differs from
supervised learning, where the teacher signal is the "correct answer". Reinforcement learning rules may
be viewed as stochastic search mechanisms that attempt to maximize the probability of positive
external reinforcement for a given training set.

In most cases, these learning rules are presented in the basic form appropriate for single unit training.
Exceptions are cases involving unsupervised (competitive or feature mapping) learning schemes where
an essential competition mechanism necessitates the use of multiple units. For such cases, simple single
layer architectures are assumed. Later chapters of this book (Chapters 5, 6, and 7) extend some of the
learning rules discussed here to networks with multiple units and multiple layers.

3.1 Supervised Learning in a Single Unit Setting

Supervised learning is treated first. Here, two groups of rules are discussed: Error correction rules and
gradient descent-based rules. By the end of this section it will be established that all of these learning
rules can be systematically derived as minimizers of an appropriate criterion function.

3.1.1 Error Correction Rules

Error correction rules were initially proposed as ad hoc rules for single unit training. These rules
essentially drive the output error of a given unit to zero. We start with the classical perceptron learning
rule and give a proof for its convergence. Then, other error correction rules such as Mays' rule and the
α-LMS rule are covered. Throughout this section, an attempt is made to point out criterion functions that
are minimized by using each rule. We will also cast these learning rules as relaxation rules, thus
unifying them with the other gradient-based search rules, such as the ones presented in Section 3.1.2.

Perceptron Learning Rule


Consider the following version of a linear threshold gate shown in Figure 3.1.1. We will refer to it as
the perceptron. The perceptron maps an input vector x = [x1 x2 ... xn+1]T to a bipolar binary output y,
and thus it may be viewed as a simple two-class classifier. The input signal xn+1 is usually set to 1 and
plays the role of a bias to the perceptron. We will denote by w the vector w = [w1 w2 ... wn+1]T Rn+1
consisting of the free parameters (weights) of the perceptron. The input/output relation for the
perceptron is given by y = sgn(xTw), where sgn is the "sign" function which returns +1 or −1 depending
on whether the sign of its scalar argument is positive or negative, respectively.

Figure 3.1.1. The perceptron computational unit.

Assume we are training the above perceptron to load (learn) the training pairs: {x1, d1}, {x2, d2}, ...,

{xm, dm} where xk ∈ R^{n+1} is the kth input vector and dk ∈ {−1, +1}, k = 1, 2, ..., m, is the desired
target for the kth input vector (usually, the order of these training pairs is random). The entire collection
of these pairs is called the training set. The goal, then, is to design a perceptron such that for each input
vector xk of the training set, the perceptron output yk matches the desired target dk; that is, we require
yk = sgn(wTxk) = dk, for each k = 1, 2, ..., m. In this case, we say that the perceptron correctly classifies
the training set. Of course, "designing" an appropriate perceptron to correctly classify the training set
amounts to determining a weight vector w* such that the following relations are satisfied:

xkTw* > 0 if dk = +1,  and  xkTw* < 0 if dk = −1,   for k = 1, 2, ..., m        (3.1.1)

Recall that the set of all x which satisfy xTw* = 0 defines a hyperplane in Rn. Thus, in the context of the
above discussion, finding a solution vector w* to Equation (3.1.1) is equivalent to finding a separating
hyperplane which correctly classifies all vectors xk, k = 1, 2, ..., m. In other words, we desire a
hyperplane xTw* = 0 which partitions the input space into two distinct regions, one containing all
points xk with dk = +1 and the other region containing all points xk with dk = −1.

One possible incremental method for arriving at a solution w* is to invoke the perceptron learning rule
(Rosenblatt, 1962):

wk+1 = wk + ρ (dk − yk) xk,   k = 1, 2, ...        (3.1.2)

where ρ is a positive constant, called the learning rate. The incremental learning process given in
Equation (3.1.2) proceeds as follows. First, an initial weight vector w1 is selected (usually at random)
to begin the process. Then the m pairs {xk, dk} of the training set are used to successively update the
weight vector until (hopefully) a solution w* is found which correctly classifies the training set. This
process of sequentially presenting the training patterns is usually referred to as "cycling" through the
training set, and a complete presentation of the m training pairs is referred to as a cycle (or pass)
through the training set. In general, more than one cycle through the training set is required to
determine an appropriate solution vector. Hence, in Equation (3.1.2), the superscript k in wk refers to
the iteration number. On the other hand, the superscript k in xk (and dk) is the label of the training pair
presented at the kth iteration. To be more precise, if the number of training pairs, m, is finite, then the
superscripts in xk and dk should be replaced by [(k − 1) mod m] + 1. Here, a mod b returns the
remainder of the division of a by b (e.g., 5 mod 8 = 5, 8 mod 8 = 0, and 19 mod 8 = 3). This
observation is valid for all incremental learning rules presented in this chapter.
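A direct implementation of the incremental rule of Equation (3.1.2), cycling through the training set as described above, might look as follows; the data set and learning rate used here are arbitrary illustrative choices, not taken from the text.

# Illustrative perceptron training loop for Equation (3.1.2).
import numpy as np

def train_perceptron(X, d, rho=0.1, max_cycles=1000):
    """Perceptron rule: w <- w + rho*(d_k - y_k)*x_k, with y_k = sgn(w . x_k)."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append the bias input x_{n+1} = 1
    w = np.zeros(X.shape[1])                       # initial weight vector w^1
    for _ in range(max_cycles):                    # each pass is one cycle through the set
        corrections = 0
        for xk, dk in zip(X, d):
            yk = 1 if xk @ w >= 0 else -1          # perceptron output
            if yk != dk:
                w = w + rho * (dk - yk) * xk       # correction on misclassification
                corrections += 1
        if corrections == 0:                       # training set correctly classified
            break
    return w

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])  # arbitrary separable data
d = np.array([1, 1, -1, -1])
w = train_perceptron(X, d)
print("solution weight vector:", w)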

Notice that for ρ = 0.5, the perceptron learning rule can be written as:

wk+1 = wk + zk  if  zkTwk ≤ 0,  and  wk+1 = wk  otherwise        (3.1.3)

where

zk = dk xk        (3.1.4)

That is, a correction is done if and only if a misclassification, indicated by

zkTwk ≤ 0        (3.1.5)

occurs. The addition of vector zk to wk in Equation (3.1.3) moves the weight vector directly toward and perhaps across the hyperplane zkTw = 0. The new inner product zkTwk+1 is larger than zkTwk by the amount ||zk||2, and the correction Δwk = wk+1 − wk is clearly moving wk in a good direction, the direction of increasing zkTw, as can be seen from Figure 3.1.2. Thus, the perceptron learning rule attempts to find a solution w* for the following system of inequalities

zkTw* > 0   for k = 1, 2, ..., m        (3.1.6)


Figure 3.1.2. Geometric representation of the perceptron learning rule with ρ = 0.5.

In an analysis of any learning algorithm, and in particular the perceptron learning algorithm of Equation (3.1.2), there are two main issues to consider: (1) the existence of solutions and (2) convergence of the algorithm to the desired solutions (if they exist). In the case of the perceptron, it is clear that a solution vector (i.e., a vector w* which correctly classifies the training set) exists if and only if the given training set is linearly separable. Assuming, then, that the training set is linearly separable, we may proceed to show that the perceptron learning rule converges to a solution (Novikoff, 1962; Ridgway, 1962; Nilsson, 1965) as follows. Let w* be any solution vector, so that

zkTw* > 0   for k = 1, 2, ..., m        (3.1.7)

Then, from Equation (3.1.3), if the kth pattern is misclassified we may write

wk+1 − α w* = wk − α w* + zk        (3.1.8)

where α is a positive scale factor, and hence

||wk+1 − α w*||2 = ||wk − α w*||2 + 2 zkT(wk − α w*) + ||zk||2        (3.1.9)

Since zk is misclassified, we have zkTwk ≤ 0, and thus

||wk+1 − α w*||2 ≤ ||wk − α w*||2 − 2α zkTw* + ||zk||2        (3.1.10)

Now, let β2 = max_k ||zk||2 and γ = min_k zkTw* (γ is positive since zkTw* > 0 for all k), and substitute into Equation (3.1.10) to get

||wk+1 − α w*||2 ≤ ||wk − α w*||2 − 2αγ + β2        (3.1.11)

If we choose α sufficiently large, in particular α = β2/γ, we obtain

||wk+1 − α w*||2 ≤ ||wk − α w*||2 − β2        (3.1.12)

Thus, the square distance between wk and α w* is reduced by at least β2 at each correction, and after k corrections we may write Equation (3.1.12) as

||wk+1 − α w*||2 ≤ ||w1 − α w*||2 − k β2        (3.1.13)

It follows that the sequence of corrections must terminate after no more than k0 corrections, where

k0 = ||w1 − α w*||2 / β2        (3.1.14)

Therefore, if a solution exists, it is achieved in a finite number of iterations. When corrections cease,
the resulting weight vector must classify all the samples correctly since a correction occurs whenever a
sample is misclassified and since each sample appears infinitely often in the sequence. In general, a
linearly separable problem admits an infinite number of solutions. The perceptron learning rule in
Equation (3.1.2) converges to one of these solutions. This solution, though, is sensitive to the value of
the learning rate, , used and to the order of presentation of the training pairs. This sensitivity is
responsible for the varying quality of the perceptron generated separating surface observed in
simulations.

The bound on the number of corrections, k0, given by Equation (3.1.14) depends on the choice of the
initial weight vector w1. If w1 = 0, we get k0 = α2 ||w*||2 / β2, or

k0 = β2 ||w*||2 / γ2        (3.1.15)

Here, k0 is a function of the initially unknown solution weight vector w*. Therefore, Equation (3.1.15)
is of no help for predicting the maximum number of corrections. However, the denominator of
Equation (3.1.15) implies that the difficulty of the problem is essentially determined by the samples
most nearly orthogonal to the solution vector.
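The bound of Equation (3.1.15) can be checked on a small example. The following Python sketch computes k0 for an arbitrary separable data set and one particular solution vector w*, and compares it with the number of corrections actually made by the rule of Equation (3.1.3) when started from w1 = 0; both the data and w* are illustrative choices.

# Illustrative check of the correction bound k0 = beta^2 ||w*||^2 / gamma^2.
import numpy as np

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])  # arbitrary data
d = np.array([1, 1, -1, -1])
Z = np.hstack([X, np.ones((4, 1))]) * d[:, None]      # z_k = d_k x_k (bias input appended)

w_star = np.array([1.0, 1.0, 0.0])                    # any separating vector will do
assert np.all(Z @ w_star > 0)

beta2 = np.max(np.sum(Z**2, axis=1))                  # beta^2 = max_k ||z_k||^2
gamma = np.min(Z @ w_star)                            # gamma  = min_k z_k . w*
k0 = beta2 * np.dot(w_star, w_star) / gamma**2
print("bound on corrections k0 =", k0)

w, corrections = np.zeros(3), 0
for _ in range(100):                                  # cycle until no corrections occur
    changed = False
    for zk in Z:
        if zk @ w <= 0:                               # misclassified: apply Equation (3.1.3)
            w, corrections, changed = w + zk, corrections + 1, True
    if not changed:
        break
print("actual corrections:", corrections)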

Generalizations of the Perceptron Learning Rule

The perceptron learning rule may be generalized to include a variable increment ρk and a fixed, positive margin b. This generalized learning rule updates the weight vector whenever zkTwk fails to exceed the margin b. Here, the algorithm for weight vector update is given by

wk+1 = wk + ρk zk  if  zkTwk ≤ b,  and  wk+1 = wk  otherwise        (3.1.16)

The margin b is useful because it gives a dead-zone robustness to the decision boundary. That is, the
perceptron's decision hyperplane is constrained to lie in a region between the two classes such that
sufficient clearance is realized between this hyperplane and the extreme points (boundary patterns) of
the training set. This makes the perceptron robust with respect to noisy inputs. It can be shown (Duda
and Hart, 1973) that if the training set is linearly separable and if the following three conditions are
satisfied:
ρk ≥ 0        (3.1.17a)

lim_{m→∞} Σ_{k=1}^{m} ρk = ∞        (3.1.17b)

lim_{m→∞} [ Σ_{k=1}^{m} (ρk)2 ] / [ Σ_{k=1}^{m} ρk ]2 = 0        (3.1.17c)

(e.g., ρk = 1/k, or even an increasing sequence such as ρk = k), then wk converges to a solution w* which satisfies ziTw* > b for i = 1, 2, ..., m. Furthermore, when ρk is fixed at a positive constant ρ, this learning rule converges in finite time.
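A sketch of the generalized rule of Equation (3.1.16) follows; the margin b, the increment sequence (chosen here simply to satisfy the conditions of Equation (3.1.17)), and the data set are all arbitrary illustrative choices.

# Illustrative implementation of the margin perceptron rule, Equation (3.1.16).
import numpy as np

def margin_perceptron(X, d, b=1.0, max_cycles=2000):
    """Correct whenever z_k . w fails to exceed the margin b, with increment rho_k."""
    Z = np.hstack([X, np.ones((X.shape[0], 1))]) * d[:, None]   # z_k = d_k x_k
    w = np.zeros(Z.shape[1])
    k = 0
    for _ in range(max_cycles):
        updated = False
        for zk in Z:
            k += 1
            rho_k = 1.0 / k                 # one increment sequence satisfying (3.1.17a-c)
            if zk @ w <= b:                 # margin not exceeded: apply a correction
                w = w + rho_k * zk
                updated = True
        if not updated:                     # all margins now exceed b
            break
    return w

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])   # arbitrary data
d = np.array([1, 1, -1, -1])
w = margin_perceptron(X, d)
print("margins z_k . w :", (np.hstack([X, np.ones((4, 1))]) * d[:, None]) @ w)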

Another variant of the perceptron learning rule is given by the "batch" update procedure

(3.1.18)

where Z(wk) is the set of patterns z misclassified by wk. Here, the weight vector change

is along the direction of the resultant vector of all misclassified patterns. In


general, this update procedure converges faster than the perceptron rule, but it requires more storage.
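A hedged sketch of this batch variant is given below (Python/NumPy; the helper name and return convention are ours): one update moves the weight vector along the sum of all currently misclassified patterns.

import numpy as np

def batch_perceptron_step(w, Z, rho=0.1):
    # One update of Equation (3.1.18); Z is (m, n+1) with rows z^k.
    misclassified = Z[Z @ w <= 0.0]            # the set Z(w^k)
    if misclassified.shape[0] == 0:
        return w, True                          # no corrections needed
    return w + rho * misclassified.sum(axis=0), False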

In the nonlinearly separable case, the above algorithms do not converge. Few theoretical results are
available on the behavior of these algorithms for nonlinearly separable problems [see Minsky and
Papert (1969) and Block and Levin (1970) for some preliminary results]. For example, it is known that
the length of w in the perceptron rule is bounded, i.e., tends to fluctuate near some limiting value ||w*||
(Efron, 1964). This information may be used to terminate the search for w*. Another approach is to
average the weight vectors near the fluctuation point w*. Butz (1967) proposed the use of a
reinforcement factor λ, 0 ≤ λ ≤ 1, in the perceptron learning rule. This reinforcement places w in a region that
tends to minimize the probability of error for nonlinearly separable cases. Butz's rule is as follows

(3.1.19)

The Perceptron Criterion Function

It is interesting to see how the above error correction rules can be derived by a gradient descent on an
appropriate criterion (objective) function. For the perceptron, we may define the following criterion
function (Duda and Hart, 1973):
J(w) = Σ (z ∈ Z(w)) (− zTw) (3.1.20)

where Z(w) is the set of samples misclassified by w (i.e., zTw ≤ 0). Note that if Z(w) is empty then J(w)
= 0, otherwise J(w) > 0. Geometrically, J(w) is proportional to the sum of the distances from the
misclassified samples to the decision boundary. The smaller J is, the better the weight vector w will be.

Given this objective function J(w), we can incrementally improve the search point wk at each iteration
by sliding downhill on the surface defined by J(w) in w space. Specifically, we may use J to perform a
discrete gradient descent search, which updates wk so that a step is taken downhill in the "steepest"
direction along the search surface J(w) at wk. This can be achieved by making the weight change ∆wk
proportional to the negative of the gradient of J at the present location wk; formally we may write

wk+1 = wk − ρ ∇J(w)|w = wk (3.1.21)

Here, the initial search point, w1, and the learning rate (step size) ρ are to be specified by the user. We
will refer to Equation (3.1.21) as the steepest gradient descent search rule or, simply, gradient descent.
Next, substituting the gradient

∇J(w) = − Σ (z ∈ Z(w)) z (3.1.22)

in Equation (3.1.21), leads to the weight update rule

wk+1 = wk + ρ Σ (z ∈ Z(wk)) z (3.1.23)

The learning rule given in Equation (3.1.23) is identical to the multiple-sample (batch) perceptron rule
of Equation (3.1.18). The original perceptron learning rule of Equation (3.1.3) can be thought of as an
"incremental" gradient descent search rule for minimizing the perceptron criterion function in Equation
(3.1.20). Following a similar procedure as in Equations (3.1.21) through (3.1.23), it can be shown that

J(w) = Σ (z ∈ Z(w)) (b − zTw), with Z(w) = {z : zTw ≤ b} (3.1.24)

is the appropriate criterion function for the modified perceptron rule in Equation (3.1.16).

Before moving on, we should note that the gradient of J in Equation (3.1.22) is not mathematically
precise. Due to the piecewise linear nature of J, sudden changes in the gradient of J occur every time

the perceptron output y goes through a transition at zTw = 0. Therefore, the gradient of J is not
defined at "transition" points w satisfying (zk)T w = 0, k = 1, 2, ..., m. However, because of the


discrete nature of Equation (3.1.21), the likelihood of wk overlapping with one of these transition points
is negligible, and thus we may still express J as in Equation (3.1.22). The reader is referred to Problem
3.1.3 for further exploration into gradient descent on the perceptron criterion function.

Mays Learning Rule


The criterion functions in Equations (3.1.20) and (3.1.24) are by no means the only functions we can
construct that are minimized when w is a solution vector. For example, an alternative function is the
quadratic function

J(w) = Σ (z ∈ Z(w)) (zTw − b)2 (3.1.25)

where b is a positive constant margin. Like the previous criterion functions, the function J(w) in
Equation (3.1.25) focuses attention on the misclassified samples. Its major difference is that its gradient
is continuous, whereas the gradient of the perceptron criterion function, with or without the use of
margin, is not. Unfortunately, the present function can be dominated by the input vectors with the

largest magnitudes. We may eliminate this undesirable effect by dividing by ||z||2:

J(w) = (1/2) Σ (z ∈ Z(w)) (zTw − b)2 / ||z||2 (3.1.26)

The gradient of J(w) in Equation (3.1.26) is given by

∇J(w) = Σ (z ∈ Z(w)) [(zTw − b) / ||z||2] z (3.1.27)

which, upon substituting in Equation (3.1.21), leads to the following learning rule

wk+1 = wk + ρ Σ (z ∈ Z(wk)) [(b − zTwk) / ||z||2] z (3.1.28)

If we consider the incremental update version of Equation (3.1.28), we arrive at Mays' rule (Mays,
1964):

wk+1 = wk + ρ [(b − (zk)Twk) / ||zk||2] zk if (zk)T wk ≤ b
wk+1 = wk otherwise (3.1.29)

If the training set is linearly separable, Mays' rule converges in a finite number of iterations, for 0 < ρ
< 2 (Duda and Hart, 1973). In the case of a nonlinearly separable training set, the training procedure in

Equation (3.1.29) will never converge. To fix this problem, a decreasing learning rate such as ρk = 1/k
may be used to force convergence to some approximate separating surface (Duda and Singleton, 1964).
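The incremental rule in Equation (3.1.29), as reconstructed above, may be sketched as follows (Python/NumPy; the margin value and function name are illustrative assumptions):

import numpy as np

def mays_step(w, z, b=0.1, rho=1.0):
    # One incremental Mays-type update; z = d x is a single training vector.
    if z @ w <= b:                                   # margin b is not met
        w = w + rho * (b - z @ w) / (z @ z) * z      # correction normalized by ||z||^2
    return w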

Widrow-Hoff (α-LMS) Learning Rule


Another example of an error correcting rule with a quadratic criterion function is the Widrow-Hoff rule
(Widrow and Hoff, 1960). This rule was originally used to train the linear unit, also known as the
adaptive linear combiner element (ADALINE), shown in Figure 3.1.3. In this case, the output of the
linear unit in response to the input xk is simply yk = (xk)Tw. The Widrow-Hoff rule was originally
proposed as an ad hoc rule, which embodies the so-called minimal disturbance principle. Later, it was
discovered (Widrow and Stearns, 1985) that this rule converges in the mean square to the solution w*
which corresponds to the least-mean-square (LMS) output error, if all input patterns are of the same
length (i.e., ||xk|| is the same for all k). Therefore, this rule is sometimes referred to as the α-LMS rule
(the "α" is used here to distinguish this rule from another very similar rule which is discussed in Section
3.1.2).

Figure 3.1.3. Adaptive linear combiner element (ADALINE).

The α-LMS rule is given by

wk+1 = wk + α (dk − yk) xk / ||xk||2 (3.1.30)

where dk ∈ R is the desired response, yk = (xk)T wk, and α > 0. Equation (3.1.30) is similar to the perceptron rule if one
sets ρ, in Equation (3.1.2), as

(3.1.31)

Note, though, that the error in Equation (3.1.30) is measured at the linear output, not after the nonlinearity as in
the perceptron. The constant α controls the stability and speed of convergence (Widrow and Stearns,
1985; Widrow and Lehr, 1990). If the input vectors are independent over time, stability is insured for
most practical purposes if 0 < α < 2.

As for Mays' rule, this rule is self-normalizing in the sense that the choice of α does not depend on the
magnitude of the input vectors. Since the α-LMS rule selects ∆wk to be collinear with xk, the desired error
correction is achieved with a weight change of the smallest possible magnitude. Thus, when adapting to
learn a new training sample, the responses to previous training samples are minimally disturbed, on the
average. This is the basic idea behind the minimal disturbance principle on which the α-LMS rule is founded.
Alternatively, one can show that the α-LMS learning rule is a gradient descent minimizer of an
appropriate quadratic criterion function (see Problem 3.1.4).
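A minimal sketch of one α-LMS update (Python/NumPy; names and the default value of α are ours) follows; note that the correction is collinear with the input and normalized by its squared length, which is the minimal-disturbance property discussed above.

import numpy as np

def alpha_lms_step(w, x, d, alpha=0.5):
    # One alpha-LMS (Widrow-Hoff) update; the error is measured at the linear output.
    y = x @ w
    return w + alpha * (d - y) * x / (x @ x)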

3.1.2 Other Gradient Descent-Based Learning Rules


In the following, additional learning rules for single unit training are derived. These rules are
systematically derived by first defining an appropriate criterion function and then optimizing such a
function by an iterative gradient search procedure.

µ-LMS Learning Rule

The µ-LMS learning rule (Widrow and Hoff, 1960) represents the most analyzed and most applied
simple learning rule. It is also of special importance due to its possible extension to learning in multiple
unit neural nets. Therefore, special attention is given to this rule in this chapter. In the following, the
µ-LMS rule is described in the context of the linear unit in Figure 3.1.3.

Let

J(w) = (1/2) Σ (i = 1 to m) (di − yi)2 (3.1.32)

be the sum of squared error (SSE) criterion function, where

yi = (xi)T w (3.1.33)

Now, using steepest gradient descent search to minimize J(w) in Equation (3.1.32) gives

wk+1 = wk + µ Σ (i = 1 to m) (di − yi) xi (3.1.34)

The criterion function J(w) in Equation (3.1.32) is quadratic in the weights because of the linear
relation between yi and w. In fact, J(w) defines a convex hyperparaboloidal surface with a single
minimum w* (the global minimum). Therefore, if the positive constant µ is chosen sufficiently small, the
gradient descent search implemented by Equation (3.1.34) will asymptotically converge towards the
solution w*, regardless of the setting of the initial search point, w1. The learning rule in Equation
(3.1.34) is sometimes referred to as the "batch" LMS rule.
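The batch rule of Equation (3.1.34) may be sketched as follows (Python/NumPy; the data layout with training vectors as rows, and the fixed number of epochs, are our own choices):

import numpy as np

def batch_lms(X, d, mu=0.01, epochs=500):
    # X: (m, n+1) matrix with one training vector per row; d: (m,) vector of targets.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        e = d - X @ w             # errors over the whole training set
        w = w + mu * X.T @ e      # step along the negative SSE gradient
    return w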

The incremental version of Equation (3.1.34), known as the µ-LMS or LMS rule, is given by

wk+1 = wk + µ (dk − yk) xk (3.1.35)

Note that this rule becomes identical to the α-LMS learning rule in Equation (3.1.30) upon setting µ as

µ = α / ||xk||2 (3.1.36)

Also, when the input vectors have the same length, as would be the case when x ∈ {−1, +1}n, then the
µ-LMS rule becomes identical to the α-LMS rule. Since the α-LMS learning algorithm converges when
0 < α < 2, we can start from Equation (3.1.36) and calculate the required range on µ for ensuring the
convergence of the µ-LMS rule:

0 < µ < 2 / ||xk||2 (3.1.37)

For input patterns independent over time and generated by a stationary process, convergence of the
mean of the weight vector, <wk>, is ensured if the fixed learning rate µ is chosen to be smaller than
2/λmax, where λmax is the largest eigenvalue of the autocorrelation matrix <x xT> of the inputs
(Widrow and Stearns, 1985. Also see Problem 4.3.8 in the next chapter for further
exploration.) In this case, <wk> approaches a solution w* as k → ∞. Here, "< >" signifies the "mean" or
expected value. Note that this bound is less restrictive than the one in Equation (3.1.37).

Horowitz and Senne (1981) showed that a more restrictive bound on µ guarantees the convergence of w in
the mean square (i.e., <||wk − w*||2> → 0 as k → ∞), for input patterns generated by a zero-mean


Gaussian process independent over time. It should be noted that convergence in the mean square
implies convergence in the mean; however, the converse is not necessarily true. The assumptions of
decorrelated patterns and stationarity are not necessary conditions for the convergence of µ-LMS
(Widrow et al., 1976; Farden, 1981). For example, Macchi and Eweda (1983) have a much stronger
result regarding convergence of the µ-LMS rule which is even valid when a finite number of successive
training patterns are strongly correlated.

In practical problems, m > n + 1; hence, it becomes impossible to satisfy all requirements (xk)Tw = dk,

k = 1, 2, ..., m. Therefore, Equation (3.1.35) never converges. Thus, for convergence, µ is set to a
decreasing sequence µk → 0 (e.g., µk = µ0/k), where µ0 is a small positive constant. In applications such as linear filtering, though, the decreasing step
size is not very valuable, because it cannot accommodate non-stationarity in the input signal. Indeed,
wk will essentially stop changing for large k, which precludes the tracking of time variations. Thus, the
fixed increment (constant µ) LMS learning rule has the advantage of limited memory, which enables it
to track time fluctuations in the input data.

When the learning rate µ is sufficiently small, the µ-LMS rule becomes a "good" approximation to the
gradient descent rule in Equation (3.1.34). This means that the weight vector wk will tend to move
towards the global minimum w* of the convex SSE criterion function. Next, we show that w* is given
by

w* = X†d (3.1.38)

where X = [x1 x2 ... xm], d = [d1 d2 ... dm]T, and X† = (XXT)−1X is the generalized inverse or pseudo-
inverse (Penrose, 1955) of X for m > n + 1.

The extreme points (minima and maxima) of the function J(w) are solutions to the equation

∇J(w) = 0 (3.1.39)
Therefore, any minimum of the SSE criterion function in Equation (3.1.32) must satisfy

∇J(w) = − Σ (i = 1 to m) [di − (xi)T w] xi = 0 (3.1.40)

Equation (3.1.40) can be rewritten as

X XT w = X d (3.1.41)

which for a nonsingular matrix X XT gives the solution in Equation (3.1.38), or explicitly

w* = (X XT)−1 X d (3.1.42)

Recall that just because w* in Equation (3.1.42) satisfies the condition ∇J(w*) = 0, this does not
guarantee that w* is a local minimum of the criterion function J. It does, however, considerably narrow
the choices in that such w* represents (in a local sense) either a point of minimum, maximum, or saddle
point of J. To verify that w* is actually a minimum of J(w), we may evaluate the second derivative or

Hessian matrix of J at w* and show that it is positive definite. This can be


readily achieved after noting that the Hessian ∇2J is equal to the positive-definite matrix XXT. Thus, w* is a minimum
of J.
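For a small, well-conditioned problem the minimum SSE solution can be computed directly, e.g. as in the following sketch (Python/NumPy; here X is stored with the training vectors as columns, matching the text's convention):

import numpy as np

def min_sse_solution(X, d):
    # w* = (X X^T)^{-1} X d, assuming X X^T is nonsingular (m > n + 1).
    return np.linalg.solve(X @ X.T, X @ d)

# Equivalently, the pseudo-inverse can be used: w_star = np.linalg.pinv(X.T) @ d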

The LMS rule may also be applied to synthesize the weight vector, w, of a perceptron for solving two-
class classification problems. Here, one starts by training the linear unit in Figure 3.1.3 with the given
training pairs {xk, dk}, k = 1, 2, ..., m, using the LMS rule. During training the desired target dk is set to
+1 for one class and to −1 for the other class. (In fact, any positive constant can be used as the target
for one class, and any negative constant can be used as the target for the other class). After
convergence of the learning process, the solution vector obtained may now be used in the perceptron
for classification. Due to the thresholding nonlinearity in the perceptron, the output of the classifier will
now be properly restricted to the set {−1, +1}.

When used as a perceptron weight vector, the minimum SSE solution in Equation (3.1.42) does not
generally minimize the perceptron classification error rate. This should not be surprising, since the SSE
criterion function is not designed to constrain its minimum inside the linearly separable solution region.
Therefore, this solution does not necessarily represent a linearly separable solution, even when the
training set is linearly separable (this is further explored in Section 3.1.5). However, when the training
set is nonlinearly separable, the solution arrived at may still be a useful approximation. Therefore, by
employing the LMS rule for perceptron training, linear separability is sacrificed for good compromise
performance on both separable and nonseparable problems.

The µ-LMS as a Stochastic Process

Stochastic approximation theory may be employed as an alternative to the deterministic gradient


descent analysis presented thus far. It has the advantage of naturally arriving at a learning rate schedule
ρk for asymptotic convergence in the mean square. Here, one starts with the mean-square error (MSE)
criterion function:

J(w) = (1/2) <(d − xTw)2> (3.1.43)

where < > denotes the mean (expectation) over all training vectors. Now, one may compute the
gradient of J as

∇J(w) = − <(d − xTw) x> = Cw − P (3.1.44)

which, upon setting to zero, allows us to find the minimum w* of J in Equation (3.1.43) as the solution
of

Cw* = P

which gives

w* = C−1P (3.1.45)

where C = <x xT> and P = <d x>. Note that the expected value of a vector or a matrix is found by
taking the expected values of its components. We refer to C as the autocorrelation matrix of the input
vectors and to P as the cross-correlation vector between the input vector x and its associated desired
target d (more to follow on the properties of C in Section 3.3.1). In Equation (3.1.45), the determinant
of C, |C|, is assumed different from zero. The solution w* in Equation (3.1.45) is sometimes called the
Wiener weight vector (Widrow and Stearns, 1985). It represents the minimum MSE solution, also
known as the least mean square (LMS) solution.

It is interesting to note here, the close relation between the minimum SSE solution in Equation (3.1.42)
and the LMS or minimum MSE solution in Equation (3.1.45). In fact, one can show that when the size
of the training set m is large, then the minimum SSE solution converges to the minimum MSE solution.

First, let us express XXT as the sum of vector outer products Σ (i = 1 to m) xi (xi)T. We can also rewrite Xd
as Σ (i = 1 to m) di xi. This representation allows us to express Equation (3.1.42) as

w* = [Σ (i = 1 to m) xi (xi)T]−1 Σ (i = 1 to m) di xi

Now, multiplying the right-hand side of the above equation by m/m allows us to express it as

w* = [(1/m) Σ (i = 1 to m) xi (xi)T]−1 [(1/m) Σ (i = 1 to m) di xi]

Finally, if m is large, the averages (1/m) Σ xi (xi)T and (1/m) Σ di xi become very good
approximations of the expectations C = <xxT> and P = <dx>, respectively. Thus, we have established
the equivalence of the minimum SSE and minimum MSE for a large training set.

Next, in order to minimize the MSE criterion, one may employ a gradient descent procedure, where,
instead of the expected gradient in Equation (3.1.44), the instantaneous gradient [(xk)T wk − dk] xk is
used. Here, at each learning step the input vector x is drawn at random. This leads to the stochastic
process:

wk+1 = wk + ρk [dk − (xk)T wk] xk (3.1.46)

which is the same as the µ-LMS rule in Equation (3.1.35) except for a variable learning rate ρk. It can
be shown that if |C| ≠ 0 and ρk satisfies the three conditions:

1. ρk > 0 (3.1.47a)

2. Σ (k = 1 to ∞) ρk = ∞ (3.1.47b)

3. Σ (k = 1 to ∞) ρk2 < ∞ (3.1.47c)

then, wk converges to w* in Equation (3.1.45) asymptotically in the mean square; i.e.,

lim (k → ∞) <||wk − w*||2> = 0 (3.1.48)

The criterion function in Equation (3.1.43) is of the form J(w) = <g(w, x, d)> and is known as a regression
function. The iterative algorithm in Equation (3.1.46) is also known as a stochastic approximation
procedure (or Kiefer-Wolfowitz or Robbins-Monro procedure). For a thorough discussion of stochastic
approximation theory, the reader is referred to Wasan (1969).
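A sketch of this stochastic approximation procedure with a decaying learning rate of the form ρk = ρ0/k (one possible schedule satisfying the reconstructed conditions in Equation (3.1.47); the constants and sampling scheme below are our own choices) is:

import numpy as np

def stochastic_lms(X, d, rho0=0.5, steps=10000, seed=0):
    # X: (m, n+1) with training vectors as rows; inputs are drawn at random each step.
    rng = np.random.default_rng(seed)
    m, n = X.shape
    w = np.zeros(n)
    for k in range(1, steps + 1):
        i = rng.integers(m)
        rho_k = rho0 / k                          # satisfies (3.1.47a-c)
        w = w + rho_k * (d[i] - X[i] @ w) * X[i]
    return w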

Example 3.1.1: In this example, we present the results of a set of simulations which should help give
some insight into the dynamics of the batch and incremental LMS learning rules. Specifically, we are
interested in comparing the convergence behavior of the discrete-time dynamical systems in Equations
(3.1.34) and (3.1.35). Consider the training set depicted in Figure 3.1.4 for a simple mapping problem.
The ten squares and ten filled circles in this figure are positioned at the points whose coordinates
(x1, x2) specify the two components of the input vectors. The squares and circles are to be mapped to
the targets +1 and −1, respectively. For example, the leftmost square in the figure represents the
training pair {[0, 1]T, 1}. Similarly, the rightmost circle represents the training pair {[2, 2]T, −1}.
Figure 3.1.5 shows plots for the evolution of the square of the distance between the vector wk and the
(computed) minimum SSE solution, w* for batch LMS (dashed line) and incremental LMS (solid line).
In both simulations, the learning rate (step size) was set to 0.005. The initial search point w1 was set to
[0, 0]T. For the incremental LMS rule, the training examples are selected randomly from the training
set. The batch LMS rule converges to the optimal solution w* in less than 100 steps. Incremental LMS
requires more learning steps, on the order of 2,000 steps, to converge to a small neighborhood of w*.
The fluctuations in ||wk − w*||2 in this neighborhood are less than 0.02 as can be seen from Figure 3.1.5.
The effect of a deterministic order of presentation of the training examples on the incremental LMS
rule is shown by the solid line in Figure 3.1.6. Here, the training examples are presented in a predefined
order, which did not change during training. The same initialization and step size are used as before. In
order to allow for a more meaningful comparison between the two LMS rule versions, one learning
step of incremental LMS is taken to mean a full cycle through the 20 samples. For comparison, the
simulation result with batch LMS learning is plotted in the figure (see dashed line). These results
indicate a very similar behavior in the convergence characteristics of incremental and batch LMS
learning. This is so because of the small step size used. Both cases show asymptotic convergence
toward the optimal solution w*, but with a relatively faster convergence of the batch LMS rule near w*;
this is attributed to the use of more accurate gradient information.

Figure 3.1.4. A 20 sample training set used in the simulations associated with Example 3.1.1. Points
signified by a square and a filled circle should map into +1 and −1, respectively.

Figure 3.1.5. Plots (learning curves) for the distance square between the search point wk and the
minimum SSE solution w* generated using two versions of the LMS learning rule. The dashed line
corresponds to the batch LMS rule in Equation (3.1.34). The solid line corresponds to the incremental
LMS rule in Equation (3.1.35) with a random order of presentation of the training patterns. In both
cases w1 = 0 and µ = 0.005 are used. Note the logarithmic scale for the iteration number k.

Figure 3.1.6. Learning curves for the batch LMS (dashed line) and incremental LMS (solid line)
learning rules for the data in Figure 3.1.4. The result for the batch LMS rule shown here is identical to
the one shown in Figure 3.1.5 (this result looks different only because of the present use of a linear
scale for the horizontal axis). The incremental LMS rule results shown assume a deterministic, fixed
order of presentation of the training patterns. Also, for the incremental LMS case, wk represents the
weight vector after the completion of the kth learning "cycle". Here, one cycle corresponds to 20
consecutive learning iterations.
Correlation Learning Rule

The Correlation rule is derived by starting from the criterion function

J(w) = − Σ (i = 1 to m) di yi (3.1.49)

where yi = (xi)Tw, and performing gradient descent to minimize J. Note that minimizing J(w) is
equivalent to maximizing the correlation between the desired target and the corresponding linear unit's
output for all xi, i = 1, 2, ..., m. Now, employing steepest gradient descent to minimize J(w) leads to the
learning rule:

wk+1 = wk + ρ dk xk (3.1.50)

By setting ρ to 1 and completing one learning cycle using Equation (3.1.50), we arrive at the weight
vector w* given by:

w* = Σ (k = 1 to m) dk xk = X d (3.1.51)

where X and d are as defined earlier. Note that Equation (3.1.51) leads to the minimum SSE solution in
Equation (3.1.38) if X† = X. This is only possible if the training vectors xk are encoded such that XXT
is the identity matrix (i.e., the xk's are orthonormal). Correlation learning is further explored in Section
7.1.1 in Chapter 7.
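One cycle of the correlation rule (with ρ = 1 and w initialized to zero) is just an accumulation of the terms dk xk; a minimal sketch (Python/NumPy, with our own function name and the text's column-per-pattern layout for X) is:

import numpy as np

def correlation_rule(X, d):
    # X: (n+1, m) with training vectors as columns; returns w* = sum_k d^k x^k = X d.
    return X @ d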

Another version of this type of learning is the covariance learning rule. This rule is obtained by steepest

gradient descent on the criterion function J(w) = − Σ (i = 1 to m) (di − <d>)(yi − <y>). Here, <y>


and <d> are computed averages, over all training pairs, for the unit's output and the desired target,
respectively. Covariance learning provides the basis of the cascade-correlation net presented in Section
6.3.2.

3.1.3 Extension of µ-LMS Rule to Units with Differentiable Activation Functions: Delta Rule

The following rule is similar to the µ-LMS rule except that it allows for units with a differentiable
nonlinear activation function f. Figure 3.1.7 illustrates a unit with a sigmoidal activation function. Here,
the unit's output is y = f(net), with net defined as the vector inner product xTw.
Figure 3.1.7. A computational unit with a differentiable sigmoidal activation function.

Again, consider the training pairs {xi, di}, i = 1, 2, ..., m, with xi ∈ Rn+1 (the last component of xi equal to 1 for all i) and di ∈ [−1,
+1]. Performing gradient descent on the instantaneous SSE criterion function J(w) = (1/2)(d − y)2,
whose gradient is given by

∇J(w) = −(d − y) f '(net) x (3.1.52)

leads to the delta rule

wk+1 = wk + ρ (dk − yk) f '(netk) xk (3.1.53)

where netk = (xk)T wk and yk = f(netk). If f is defined by f(net) = tanh(β net), then its
derivative is f '(net) = β [1 − f 2(net)]. For the "logistic" function, f(net) = 1 / (1 + e−β net), the
derivative is f '(net) = β f(net) [1 − f(net)]. Figure 3.1.8 plots f and f ' for the hyperbolic
tangent activation function. Note how f asymptotically approaches +1 and −1 in the limit as
net approaches +∞ and −∞, respectively.

Figure 3.1.8. Hyperbolic tangent activation function f and its derivative f ', plotted for a fixed value of β.
One disadvantage of the delta learning rule is immediately apparent upon inspection of the graph of
f '(net) in Figure 3.1.8. In particular, notice how f '(net) ≈ 0 when net has large magnitude (i.e., |net| > 3);
these regions are called "flat spots" of f '. In these flat spots, we expect the delta learning rule to
progress very slowly (i.e., very small weight changes even when the error (d − y) is large), because the
magnitude of the weight change in Equation (3.1.53) directly depends on the magnitude of f '(net).
Since slow convergence results in excessive computation time, it would be advantageous to try to
eliminate the flat spot phenomenon when using the delta learning rule. One common flat spot
elimination technique involves replacing f ' by f ' plus a small positive bias ε. In this case, the weight
update equation reads as:

wk+1 = wk + ρ (dk − yk) [f '(netk) + ε] xk (3.1.54)

One of the primary advantages of the delta rule is that it has a natural extension which may be used to
train multilayered neural nets. This extension, known as back error propagation, will be discussed in
Chapter 5.
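The delta rule for a tanh unit, including the optional flat-spot bias of Equation (3.1.54), can be sketched as follows (Python/NumPy; all parameter values are illustrative):

import numpy as np

def delta_step(w, x, d, rho=0.1, beta=1.0, eps=0.0):
    # One delta-rule update; set eps > 0 to apply the flat-spot elimination trick.
    net = x @ w
    y = np.tanh(beta * net)
    f_prime = beta * (1.0 - y ** 2)               # derivative of tanh(beta * net)
    return w + rho * (d - y) * (f_prime + eps) * x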

3.1.4 Adaptive Ho-Kashyap (AHK) Learning Rules

Hassoun and Song (1992) proposed a set of adaptive learning rules for classification problems as
enhanced alternatives to the LMS and perceptron learning rules. In the following, three learning rules:
AHK I, AHK II, and AHK III are derived based on gradient descent strategies on an appropriate
criterion function. Two of the proposed learning rules, AHK I and AHK II, are well suited for
generating robust decision surfaces for linearly separable problems. The third training rule, AHK III,
extends these capabilities to find "good" approximate solutions for nonlinearly separable problems. The
three AHK learning rules preserve the simple incremental nature found in the LMS and perceptron
learning rules. The AHK rules also possess additional processing capabilities, such as the ability to
automatically identify critical cluster boundaries and place a linear decision surface in such a way that
it leads to enhanced classification robustness.

Consider a two-class {c1, c2} classification problem with m labeled feature vectors (training vectors)
{xi, di}, i = 1, 2, ..., m. Assume that xi belongs to Rn+1 (with the last component of xi being a constant
bias of value 1) and that di = +1 (−1) if xi ∈ c1 (c2). Then, a single perceptron can be trained to correctly
classify the above training pairs if an (n + 1)-dimensional weight vector w is computed which satisfies
the following set of m inequalities (the sgn function is assumed to be the perceptron's activation
function):

di (xi)T w > 0 for i = 1, 2, ..., m (3.1.55)

Next, if we define a set of m new vectors, zi, according to

zi = di xi (3.1.56)

and we let

Z = [ z1 z2 ... zm] (3.1.57)

then Equation (3.1.55) may be rewritten as the single matrix equation


ZT w > 0 (3.1.58)

Now, defining an m-dimensional positive-valued margin vector b (b > 0) and using it in Equation
(3.1.58), we arrive at the following equivalent form of Equation (3.1.55):

ZT w = b (3.1.59)

Thus, the training of the perceptron is now equivalent to solving Equation (3.1.59) for w, subject to the
constraint b > 0. Ho and Kashyap (1965) proposed an iterative algorithm for solving Equation (3.1.59).
In the Ho-Kashyap algorithm, the components of the margin vector are first initialized to small positive
values, and the pseudo-inverse is used to generate a solution for w (based on the initial guess of b)

which minimizes the SSE criterion function, J(w, b) = || ZTw − b ||2:

w = Z†b (3.1.60)

where Z† = (Z ZT)−1 Z, for m > n + 1. Next, a new estimate for the margin vector is computed by
performing the constrained (b > 0) gradient descent

bk+1 = bk + ρ (ε + |ε|), with ε = ZT wk − bk (3.1.61)

where | | denotes the absolute value of the components of the argument vector and bk is the "current"
margin vector. A new estimate of w can now be computed using Equation (3.1.60) and employing the
updated margin vector from Equation (3.1.61). This process continues until all the components of ε
are zero (or are sufficiently small and positive), which is an indication of linear separability of the
training set, or until ε < 0, which is an indication of nonlinear separability of the training set (no
solution is found). It can be shown (Ho and Kashyap, 1965; Slansky and Wassel, 1981) that the Ho-
Kashyap procedure converges in a finite number of steps if the training set is linearly separable. For
simulations comparing the above training algorithm to the LMS and perceptron training procedures, the
reader is referred to Hassoun and Clark (1988), Hassoun and Youssef (1989), and Hassoun (1989a).
We will refer to the above algorithm as the direct Ho-Kashyap (DHK) algorithm.
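Under the reconstructed margin update of Equation (3.1.61), the direct Ho-Kashyap procedure can be sketched as below (Python/NumPy; the stopping tolerances, constants, and return convention are our own assumptions):

import numpy as np

def direct_ho_kashyap(Z, rho=0.9, b0=0.1, max_iter=1000, tol=1e-6):
    # Z: (n+1, m) with the vectors z^i as columns; requires m > n + 1.
    m = Z.shape[1]
    b = np.full(m, b0)                       # small positive initial margins
    Z_pinv = np.linalg.pinv(Z.T)             # equals (Z Z^T)^{-1} Z when Z Z^T is nonsingular
    w = Z_pinv @ b
    for _ in range(max_iter):
        eps = Z.T @ w - b                    # error vector
        if np.all(np.abs(eps) < tol):
            return w, b, True                # linearly separable: solution found
        if np.all(eps < 0.0):
            return w, b, False               # nonlinearly separable: no solution
        b = b + rho * (eps + np.abs(eps))    # constrained (b > 0) margin update
        w = Z_pinv @ b                       # re-solve for w with the new margins
    return w, b, False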

The direct synthesis of the w estimate in Equation (3.1.60) involves a one-time computation of the
pseudo-inverse of Z. However, such computation can be computationally expensive and requires
special treatment when Z ZT is ill-conditioned (i.e., |ZZT| close to zero). An alternative algorithm that
is based on gradient descent principles and which does not require the direct computation of Z† can be
derived. This derivation is presented next.

Starting with the criterion function J(w, b) = || ZTw − b ||2, gradient descent may be performed


(Slansky and Wassel, 1981) with respect to b and w so that J is minimized subject to the constraint b >
0. The gradient of J with respect to w and b is given by

(3.1.62a)

(3.1.62b)
where the superscripts k and k + 1 represent current and updated values, respectively. One analytic
method for imposing the constraint b > 0 is to replace the gradient in Equation (3.1.62a) by −0.5(ε + |
ε |) with ε as defined in Equation (3.1.61). This leads to the following gradient descent formulation
of the Ho-Kashyap procedure:

(3.1.63a)

and

(3.1.63b)

where ρ1 and ρ2 are strictly positive constant learning rates. Because of the requirement that all training
vectors zk (or xk) be present and included in Z, we will refer to the above procedure as the batch mode
adaptive Ho-Kashyap (AHK) procedure. It can be easily shown that if ρ1 = 0 and b1 = 1, Equation
(3.1.63) reduces to the µ-LMS learning rule. Furthermore, convergence can be guaranteed (Duda and
Hart, 1973) if 0 < ρ1 < 2 and 0 < ρ2 < 2/λmax, where λmax is the largest eigenvalue of the positive
definite matrix Z ZT.

A completely adaptive Ho-Kashyap procedure for solving Equation (3.1.59) is arrived at by starting

from the instantaneous criterion function J(w, bi) = [(zi)T w − bi]2, which leads to the


following incremental update rules:

(3.1.64a)

and

(3.1.64b)

Here bi represents a scalar margin associated with the xi input. In all of the above Ho-Kashyap learning
procedures, the margin values are initialized to small positive values, and the perceptron weights are
initialized to zero (or small random) values. If full margin error correction is assumed in Equation
(3.1.64a), i.e., ρ1 = 1, the incremental learning procedure in Equation (3.1.64) reduces to the
heuristically derived procedure reported in Hassoun and Clark (1988). An alternative way of writing
Equation (3.1.64) is

and (3.1.65a)
and (3.1.65b)

where ∆b and ∆w signify the difference between the updated and current values of b and w,
respectively. We will refer to this procedure as the AHK I learning rule. For comparison purposes, it

may be noted that the µ-LMS rule in Equation (3.1.35) can be written as ∆w = ρ (bi − (zi)Tw) zi, with bi held
fixed at +1.

The implied constraint bi > 0 in Equation (3.1.64) and Equation (3.1.65) was realized by starting with a
positive initial margin and restricting the change in ∆ b to positive real values. An alternative, more
flexible way to realize this constraint is to allow both positive and negative changes in ∆ b, except for
the cases where a decrease in bi results in a negative margin. This modification results in the following
alternative AHK II learning rule:

and (3.1.66a)

and (3.1.66b)

In the general case of an adaptive margin, as in Equation (3.1.66), Hassoun and Song (1992) showed
that a sufficient condition for the convergence of the AHK rules is given by

(3.1.67a)

(3.1.67b)

Another variation results in the AHK III rule which is appropriate for both linearly separable and
nonlinearly separable problems. Here, ∆ w is set to 0 in Equation (3.1.66b). The advantages of the
AHK III rule are that (1) it is capable of adaptively identifying difficult-to-separate class boundaries
and (2) it uses such information to discard nonseparable training vectors and speed up convergence
(Hassoun and Song, 1992). The reader is invited to apply the AHK III as in Problem 3.1.7 for gaining
insight into the dynamics and separation behavior of this learning rule.

Example 3.1.2: In this example, the perceptron, LMS, and AHK learning rules are compared in terms
of the quality of the solution they generate. Consider the simple two-class linearly separable problem
shown earlier in Figure 3.1.4. The µ-LMS rule of Equation (3.1.35) is used to obtain the solution shown
as a dashed line in Figure 3.1.9. Here, the initial weight vector was set to 0 and a learning rate µ = 0.005
is used. This solution is not one of the linearly separable solutions for this problem. Four examples of
linearly separable solutions are shown as solid lines in the figure. These solutions are generated using
the perceptron learning rule of Equation (3.1.2), with varying order of input vector presentations and
with a learning rate of ρ = 0.1. Here, it should be noted that the most robust solution, in the sense of
tolerance to noisy input, is the one shown as a dotted line in Figure 3.1.9.
This robust solution was in fact automatically generated by the AHK I learning rule of Equation
(3.1.65).
Figure 3.1.9. LMS generated decision boundary (dashed line) for a two-class linearly separable
problem. For comparison, four solutions generated using the perceptron learning rule are shown (solid
lines). The dotted line is the solution generated by the AHK I rule.

3.1.5 Other Criterion Functions

The SSE criterion function in Equation (3.1.32) is not the only possible choice. We have already seen
other alternative functions such as the ones in Equations (3.1.20), (3.1.24), and (3.1.25). In general, any
differentiable function that is minimized upon setting yi = di, for i = 1, 2, ..., m, could be used. One
possible generalization of SSE is the Minkowski-r criterion function (Hanson and Burr, 1988) given by

J(w) = (1/r) Σ (i = 1 to m) |di − yi|r (3.1.68)

or its instantaneous version

J(w) = (1/r) |d − y|r (3.1.69)

Figure 3.1.10 shows a plot of |d − y|r for r = 1, 2, and 20. The general form of the gradient of this
criterion function is given by

∇J(w) = − |d − y|r−1 sgn(d − y) f '(net) x (3.1.70)

Note that for r = 2 this reduces to the gradient of the SSE criterion function given by Equation (3.1.52).
If r = 1, then J(w) = |d − y| with the gradient (note that the gradient of J(w) does not exist at the solution
points d = y)

∇J(w) = − sgn(d − y) f '(net) x (3.1.71)

In this case, the criterion function in Equation (3.1.68) is known as the Manhattan norm. For r → ∞,
a supremum error measure is approached.
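An incremental step on the instantaneous Minkowski-r criterion for a sigmoidal unit, using the reconstructed gradient of Equation (3.1.70), may be sketched as follows (Python/NumPy; r and the other constants are illustrative):

import numpy as np

def minkowski_r_step(w, x, d, r=1.5, rho=0.1, beta=1.0):
    # Gradient step on (1/r)|d - y|^r for y = tanh(beta * x^T w).
    net = x @ w
    y = np.tanh(beta * net)
    f_prime = beta * (1.0 - y ** 2)
    err = d - y
    return w + rho * (np.abs(err) ** (r - 1)) * np.sign(err) * f_prime * x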

Figure 3.1.10. A family of instantaneous Minkowski-r criterion functions.

A small r gives less weight for large deviations and tends to reduce the influence of the outermost
points in the input space during learning. It can be shown, for a linear unit with normally distributed
inputs, that r = 2 is an appropriate choice in the sense of both minimum SSE and minimum probability
of prediction error (maximum likelihood). The proof is as follows.

Consider the training pairs {xi, di}, i = 1, 2, ..., m. Assume that the vectors xi are drawn randomly and
independently from a normal distribution. Then a linear unit with a fixed but unknown weight vector w

outputs the estimate yi = (xi)T w when presented with input xi. Since a weighted sum of independent
normally distributed random variables is itself normally distributed [e.g., see Mosteller et al. (1970)],
then yi is normally distributed. Thus the prediction error di − yi is normally
distributed with mean zero and some variance, σ2. This allows us to express the conditional
probability density for observing target di, given w, upon the presentation of xi as

P(di | w, xi) = [1 / (σ √(2π))] exp[ − (di − (xi)T w)2 / (2σ2) ] (3.1.72)

This function is also known as the likelihood of w with respect to observation di. The maximum
likelihood estimate of w is that value of w which maximizes the probability of occurrence of
observation di for input xi. The likelihood of w with respect to the whole training set is the joint
distribution

L(w) = Π (i = 1 to m) P(di | w, xi) (3.1.73)

Maximizing the above likelihood is equivalent to maximizing the log-likelihood function:

ln L(w) = − m ln(σ √(2π)) − [1 / (2σ2)] Σ (i = 1 to m) [di − (xi)T w]2 (3.1.74)

Since the term − m ln(σ √(2π)) is a constant, maximizing the log-likelihood function in Equation


(3.1.74) is equivalent to minimizing the SSE criterion

J(w) = (1/2) Σ (i = 1 to m) [di − (xi)T w]2 (3.1.75)

Therefore, with the assumption of a linear unit (ADALINE) with normally distributed inputs, the SSE
criterion is optimal in the sense of minimizing prediction error. However, if the input distribution is
non-Gaussian, then the SSE criterion will not possess maximum likelihood properties. See Mosteller
and Tukey (1980) for a more thorough discussion on the maximum likelihood estimation technique.

If the distribution of the training patterns has a heavy tail such as the Laplace-type distribution, r = 1
will be a better criterion function choice. This criterion function is known as robust regression since it
is more robust to an outlier training sample than r = 2. Finally, 1 < r < 2 is appropriate to use for
pseudo-Gaussian distributions where the distribution tails are more pronounced than in the Gaussian.
Another criterion function that can be used (Baum and Wilczek, 1988; Hopfield, 1987; Solla et al.,
1988) is the instantaneous relative entropy error measure (Kullback, 1959) defined by

J(w) = (1/2) [ (1 + d) ln((1 + d)/(1 + y)) + (1 − d) ln((1 − d)/(1 − y)) ] (3.1.76)

where d belongs to the open interval (−1, +1). As before, J(w) ≥ 0, and if y = d then J(w) = 0. If
y = f(net) = tanh(β net), the gradient of Equation (3.1.76) is

∇J(w) = − β (d − y) x (3.1.77)

The factor f '(net) in Equations (3.1.53) and (3.1.70) is missing from Equation (3.1.77). This eliminates
the flat-spot encountered in the delta rule and makes the training here more like µ-LMS (note,
however, that y here is given by y = tanh(β net)). This entropy criterion is "well formed" in the
sense that gradient descent over such a function will result in a linearly separable solution, if one exists
(Wittner and Denker, 1988; Hertz et al., 1991). On the other hand, gradient descent on the SSE
criterion function does not share this property, since it may fail to find a linearly separable solution as
demonstrated in Example 3.1.2.
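A single gradient step on this relative entropy criterion for a tanh unit, using the reconstructed gradient of Equation (3.1.77), is sketched below (Python/NumPy; constants are illustrative); note that no f '(net) factor appears, so the update does not stall in the flat spots of the activation function.

import numpy as np

def entropy_step(w, x, d, rho=0.1, beta=1.0):
    # d must lie in the open interval (-1, +1).
    y = np.tanh(beta * (x @ w))
    return w + rho * beta * (d - y) * x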

In order for gradient descent search to find a solution w* in the desired linearly separable region, we
need to use a well-formed criterion function. Consider the following general criterion function

J(w) = Σ (i = 1 to m) g(si) (3.1.78)

where

si = (zi)T w

Let s = zTw. Now, we say J is "well-formed" if g(s) is differentiable and satisfies the following
conditions (Wittner and Denker, 1988):

1. For all s, −g'(s) ≥ 0; i.e., g does not push in the wrong direction.

2. There exists ε > 0, such that −g'(s) ≥ ε for all s ≤ 0; i.e., g keeps pushing if there is a
misclassification.

3. g(s) is bounded below.

For a single unit with weight vector w, it can be shown (Wittner and Denker, 1988) that if the criterion
function is well-formed, then gradient descent is guaranteed to enter the region of linearly separable
solutions w*, provided that such a region exists.
Example 3.1.3: The perceptron criterion function in Equation (3.1.20) is a well-formed criterion
function since it satisfies the above conditions:

1. J(w) = Σ (z ∈ Z(w)) (−zTw), thus g(s) = −s and −g'(s) = 1 > 0 for all s.

2. −g'(s) = 1 ≥ ε > 0 for all s ≤ 0.

3. g(s) is bounded below, since g(s) = −s ≥ 0 for all misclassified samples (s ≤ 0).

3.1.6 Extension of Gradient-Descent-Based Learning to Stochastic Units

The linear threshold gate, perceptron, and ADALINE are examples of deterministic units; for a given
input, the unit always responds with the same output. On the other hand, a stochastic unit has a binary-
valued output which is a probabilistic function of the input activity net, as depicted in Figure 3.1.11.

Figure 3.1.11. A stochastic unit.

Formally, y is given by

y = +1 with probability P(net), and y = −1 with probability 1 − P(net) (3.1.79)

One possible probability function is P(net) = 1 / (1 + e−2β net). Hence,
P(y = −1) = 1 − P(net) = 1 / (1 + e2β net). Also, note that the expected value of y, <y>, is given
by

<y> = (+1) P(net) + (−1) [1 − P(net)] = 2P(net) − 1 = tanh(β net) (3.1.80)

Stochastic units are the basis for reinforcement learning networks, as is shown in the next section. Also,
these units allow for a natural mapping of optimal stochastic learning and retrieval methods onto neural
networks, as discussed in Section 8.3.

Let us now define an SSE criterion function in terms of the mean output of the stochastic unit:

J(w) = (1/2) Σ (i = 1 to m) (di − <yi>)2 (3.1.81)

Employing gradient descent, we arrive at the update rule

wk+1 = wk + ρ Σ (i = 1 to m) (di − <yi>) β [1 − tanh2(β neti)] xi (3.1.82)

In the incremental update mode, we have the following update rule:

wk+1 = wk + ρ (dk − <yk>) β [1 − tanh2(β netk)] xk (3.1.83)

This learning rule is identical in form to the delta learning rule given in Equation (3.1.53), which used a
deterministic unit with an activation function f (net) = tanh(net). Therefore, in an average sense, the
stochastic unit learning rule in Equation (3.1.83) leads to a weight vector which is equal to that
obtained using the delta rule for a deterministic unit with a hyperbolic tangent activation function.
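A sketch of the stochastic unit (Python/NumPy), using the probability function assumed in the reconstruction above so that <y> = tanh(β net), is:

import numpy as np

def stochastic_output(w, x, beta=1.0, rng=None):
    # Sample y in {-1, +1} with P(y = +1 | net) = 1 / (1 + exp(-2*beta*net)).
    rng = np.random.default_rng() if rng is None else rng
    net = x @ w
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * net))
    return 1.0 if rng.random() < p_plus else -1.0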

3.2 Reinforcement Learning

Reinforcement learning is a process of trial and error which is designed to maximize the expected value
of a criterion function known as a "reinforcement signal." The basic idea of reinforcement learning has
its origins in psychology in connection with experimental studies of animal learning (Thorndike, 1911).
In its simplest form, reinforcement learning is based on the idea that if an action is followed by an
"improvement" in the state of affairs, then the tendency to produce that action is strengthened, i.e.,
reinforced. Otherwise, the tendency of the system to produce that action is weakened (Barto and Singh,
1991; Sutton et al., 1991).

Given a training set of the form {xk, rk}, k = 1, 2, ..., m, where xk ∈ Rn+1 and rk is an evaluative signal
(normally rk ∈ {−1, +1}) which is supplied by a "critic". The idea here is not to associate xk with rk
as in supervised learning. Rather, rk is a reinforcement signal which informs the unit being trained
about its performance on the input xk. So rk evaluates the "appropriateness" of the unit's output, yk, due
to the input xk. Usually, rk gives no indication on what yk should be. It is therefore important for the
unit to be stochastic so that a mechanism of exploration of the output space is present.

One may view supervised learning in a stochastic unit as an extreme case of reinforcement learning,
where the output of the unit is binary and there is one correct output for each input. Here, rk becomes
the desired target dk. Also, we may use the gradient descent learning rule of Equation (3.1.83) to train
the stochastic unit. In general, the reinforcement signal itself may be stochastic such that the pair
{xk, rk} only provides the "probability" of positive reinforcement. The most general extreme for
reinforcement learning (and the most difficult) is where both the reinforcement signal and the input
patterns depend arbitrarily on the past history of the stochastic unit's output. An example would be the
stabilization of an unstable dynamical system or even a game of chess where the reinforcement signal
arrives at the end of a sequence of player moves.

3.2.1 Associative Reward-Penalty Reinforcement Learning Rule


We now present a reinforcement learning rule due to Barto and Anandan (1985), which is known as the
associative reward-penalty (Arp) algorithm. We discuss it here in the context of a single stochastic unit.
Motivated by Equation (3.1.83), we may express the Arp reinforcement rule as

wk+1 = wk + ρk (dk − <yk>) f '(netk) xk (3.2.1)

where

dk = yk if rk = +1, and dk = −yk if rk = −1 (3.2.2)

and

ρk = ρ+ if rk = +1, and ρk = ρ− if rk = −1 (3.2.3)

with ρ+ >> ρ− > 0. The derivative term in Equation (3.2.1) may be eliminated without
affecting the general behavior of this learning rule. In this case, the resulting learning rule corresponds
to steepest descent on the relative entropy criterion function. The setting of dk according to Equation
(3.2.2) guides the unit to do what it just did if yk is "good" and to do the opposite if not (Widrow et al.,
1973). In general, this makes the dynamics of wk in Equation (3.2.1) substantially different from that of
wk in the supervised stochastic learning rule in Equation (3.1.83). When learning converges, the
output probabilities approach 0 or 1, making the unit effectively deterministic; the unit's output approaches the state
providing the largest average reinforcement on the training set.

One variation of Arp (Barto and Jordan, 1987) utilizes a continuous-valued or graded reinforcement
signal, rk ∈ [−1, +1], and has the form:

(3.2.4)

Another variation uses rk = ±1 and has the simple form:

(3.2.5)

This latter rule is more amenable to theoretical analysis, as is shown in Section 4.5, where it will be
shown that the rule tends to maximize the average reinforcement signal, <r>. Reinforcement learning
speed may be improved if batch mode training is used (Barto and Jordan, 1987; Ackley and Littman,
1990). Here, a given pattern xk is presented several times, and the accumulation of all the weight
changes is used to update wk. Then, pattern xk+1 is presented several times, and so on. For an overview
treatment of the theory of reinforcement learning, see Barto (1985) and Williams (1992). Also, see the
Special Issue on Reinforcement Learning, edited by Sutton (1992), for theoretical and practical
considerations.
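Putting the pieces together, one reward-penalty update for a single stochastic unit, following the reconstructed Equations (3.2.1) through (3.2.3) with the derivative term dropped as discussed above, may be sketched as follows (Python/NumPy; all constants are illustrative assumptions):

import numpy as np

def arp_step(w, x, r, rho_plus=0.1, rho_minus=0.01, beta=1.0, rng=None):
    # r in {+1, -1} is the critic's reinforcement signal for input x.
    rng = np.random.default_rng() if rng is None else rng
    net = x @ w
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * net))
    y = 1.0 if rng.random() < p_plus else -1.0   # sampled stochastic output
    d = y if r == +1 else -y                     # "do it again" vs. "do the opposite"
    rho = rho_plus if r == +1 else rho_minus     # reward rate >> penalty rate
    mean_y = np.tanh(beta * net)                 # <y>
    return w + rho * (d - mean_y) * x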
3.3 Unsupervised Learning

In unsupervised learning, there is no teacher signal. We are given a training set {xi; i = 1, 2, ..., m}, of
unlabeled vectors in Rn. The objective is to categorize or discover features or regularities in the
training data. In some cases the xi's must be mapped into a lower dimensional set of patterns such that
any topological relations existing among the xi's are preserved among the new set of patterns.
Normally, the success of unsupervised learning hinges on some appropriately designed network which
encompasses a task-independent criterion of the quality of representation that the network is required to
learn. Here, the weights of the network are to be optimized with respect to this criterion.

Our interest here is in training networks of simple units to perform the above tasks. In the remainder of
this chapter, some basic unsupervised learning rules for a single unit and for simple networks are
introduced. The following three classes of unsupervised rules are considered: Hebbian learning,
competitive learning, and self-organizing feature map learning. Hebbian learning is treated first. Then,
competitive learning and self-organizing feature map learning are covered in Sections 3.4 and 3.5,
respectively.

3.3.1 Hebbian Learning

The rules considered in this section are motivated by the classical Hebbian synaptic modification
hypothesis (Hebb, 1949). Hebb suggested that biological synaptic efficacies (w) change in proportion to
the correlation between the firing of the pre- and post-synaptic neurons, x and y, respectively, which
may be stated formally as (Stent, 1973; Changeux and Danchin, 1976)

wk+1 = wk + ρ yk xk (3.3.1)

where ρ > 0 and the unit's output is y = xTw.

Let us now assume that the input vectors are drawn from an arbitrary probability distribution p(x). Let
us also assume that the network being trained consists of a single unit. At each time k, we will present a
vector x, randomly drawn from p(x), to this unit. We will employ the Hebbian rule in Equation (3.3.1)

to update the weight vector w. The expected weight change <∆w> can be evaluated by averaging
Equation (3.3.1) over all inputs x. This gives

<∆w> = ρ <y x> = ρ <x xT w> (3.3.2)

or, assuming x and w are statistically independent,

<∆w> = ρ C w (3.3.3)

Since, at equilibrium, <∆w> = 0, Equation (3.3.3) leads to Cw = 0, and thus w* = 0 is the only
equilibrium state. Here, C = <x xT> is known as the autocorrelation matrix and is given by

C = <x xT>, with elements Cij = <xi xj> (3.3.4)

The terms on the main diagonal of C are the mean squares of the input components, and the cross terms
are the cross correlations among the input components. C is a Hermitian matrix (real and symmetric).
Thus, its eigenvalues, λi, i = 1, 2, ..., n, are positive real or zero, and it has orthogonal eigenvectors, c(i).
From the definition of an eigenvector, each c(i) satisfies the relation Cc(i) = λi c(i).

It can be shown that the solution w* = 0 is not stable (see Section 4.3.1). Therefore, Equation (3.3.3) is
unstable and it drives w to infinite magnitude, with a direction parallel to that of the eigenvector of C
with the largest eigenvalue. It will be shown, in Section 4.3.1, that this learning rule tends to maximize

the mean square of y, <y2>. In other words, this rule is driven to maximize the variance of the output
of a linear unit, assuming that y has zero mean. A zero mean y can be achieved if the unit inputs are
independent random variables with zero mean. One way to prevent the divergence of the Hebbian

learning rule in Equation (3.3.1) is to normalize ||w|| to 1 after each learning step (von der Malsburg,
1973; Rubner and Tavan, 1989). This leads to the update rule:

wk+1 = (wk + ρ yk xk) / ||wk + ρ yk xk|| (3.3.5)

In the following, we briefly describe additional stable Hebbian-type learning rules. The detailed
analysis of these rules is deferred to Chapter 4.

3.3.2 Oja's Rule

An alternative approach for stabilizing the Hebbian rule is to modify it by adding a weight decay
proportional to y2 (Oja, 1982). This results in Oja's rule:

wk+1 = wk + ρ yk (xk − yk wk) (3.3.6)

Oja's rule converges in the mean to a state w* which maximizes the mean value of y2, <y2>, subject to
the constraint ||w|| = 1. It can also be shown that the solution w* is the principal eigenvector (the one
with the largest corresponding eigenvalue) of C (Oja, 1982; Oja and Karhunen, 1985). The analysis of
this rule is covered in Chapter 4.
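A single-unit sketch of Oja's rule (Python/NumPy; the learning rate is illustrative) follows; repeated presentation of zero-mean data drives w toward the unit-norm principal eigenvector of C.

import numpy as np

def oja_step(w, x, rho=0.02):
    # Hebbian term plus a decay proportional to y^2; keeps ||w|| near 1.
    y = x @ w
    return w + rho * y * (x - y * w)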

3.3.3 Yuille et al. Rule


Other modifications to the Hebbian rule have been proposed to prevent divergence. Yuille et al. (1989)
proposed the rule

wk+1 = wk + ρ (yk xk − ||wk||2 wk) (3.3.7)

It can be shown (see Problem 3.3.4) that, in an average sense, the weight vector update is given by
gradient descent on the criterion function:

J(w) = − (1/2) wTCw + (1/4) ||w||4 (3.3.8)

In Section 4.3.4, it is shown that Equation (3.3.7) converges to a vector w* that points in the same (or
opposite) direction of the principal eigenvector of C and whose norm is given by the square root of the
largest eigenvalue of C.

Example 3.3.1: In this example, the convergence behavior of Oja's and Yuille et al. learning rules are
demonstrated on zero-mean random data. Figures 3.3.1 through 3.3.4 show two sets of simulation
results for the evolution of the weight vector of a single linear unit trained with Oja's rule and with
Yuille et al. rule. Figures 3.3.1 and 3.3.2 show the behavior of the norm and the direction (cosine of the
angle between w and the eigenvector of C with the largest eigenvalue) of the weight vector w as a
function of iteration number, k, respectively. In this simulation, the data (training set) consists of sixty
15-dimensional vectors whose components are generated randomly and independently from a uniform
distribution in the range [−0.5, +0.5]. In this particular case, the data set leads to a correlation matrix
having its two largest eigenvalues equal to 0.1578 and 0.1515, respectively. During learning, the
training vectors are presented in a fixed, cyclic order, and a learning rate ρ = 0.02 is used. As can be
seen from Figure 3.3.1, the length of wk (practically) converges after 3,000 iterations. Here, both the Oja
and Yuille et al. rules converge to the theoretically predicted values of 1 and √0.1578 ≈ 0.397, respectively.
The two rules exhibit an almost identical weight vector direction evolution as depicted in Figure 3.3.2.
The figure shows an initial low overlap between the starting weight vector and the principal
eigenvector of the data correlation matrix. This overlap increases slowly at first, but then increases fast
towards 1. As the direction of w approaches that of the principal eigenvector (i.e., cos θ approaches 1),
the convergence becomes slow again. Note that Figure 3.3.2 only shows the evolution of cos θ over the
first 3600 iterations. Beyond this point, the convergence becomes very slow. This is due to the uniform
nature of the data, which does not allow for the principal eigenvector to dominate all other
eigenvectors. Thus, a strong competition emerges among several eigenvectors, each attempting to align
wk along its own direction; the end result being the slow convergence of cos θ.

The second set of simulations involves a training set of sixty 15-dimensional vectors drawn randomly
from a normal distribution with zero mean and variance of 1. This data leads to a correlation matrix C
with a dominating eigenvector, relative to that of the previous data set. Here, the largest two
eigenvalues of C are equal to 2.1172 and 1.6299, respectively. Again, Oja and Yuille et al. rules are
used, but with a learning rate of 0.01. Figure 3.3.3 shows a better behaved convergence for Oja's rule as
compared to the Yuille et al. rule. The latter rule exhibits an oscillatory behavior in ||wk|| about its
theoretical asymptotic value, √2.1172 ≈ 1.455. As for the convergence of the direction of w, Figure 3.3.4
shows a comparable behavior for both rules. Here, the existence of a dominating eigenvector for C is
responsible for the relatively faster convergence of cos θ, as compared to the earlier simulation in Figure
3.3.2. Finally, we note that the oscillatory behavior in Figures 3.3.3 and 3.3.4 can be significantly
reduced by resorting to smaller constant learning rates or by using a decaying learning rate of the form
ρk = ρ0/k. This, however, leads to slower convergence speeds for both ||wk|| and cos θ.
Figure 3.3.1. Weight vector magnitude vs. time for Oja's rule (solid curve) and for Yuille et al. rule
(dashed curve) with ρ = 0.02. The training set consists of sixty 15-dimensional real-valued vectors
whose components are generated according to a uniform random distribution in the range [−0.5, +0.5].

Figure 3.3.2. Evolution of the cosine of the angle between the weight vector and the principal
eigenvector of the correlation matrix for Oja's rule (solid curve) and for Yuille et al. rule (dashed curve)
with ρ = 0.02 (the dashed line is hard to see because it overlaps with the solid line). The training set is
the same as for Figure 3.3.1.

Figure 3.3.3. Weight vector magnitude vs. time for Oja's rule (solid curve) and for Yuille et al. rule
(dashed curve) with ρ = 0.01. The training set consists of sixty 15-dimensional real-valued vectors
whose components are generated according to N(0, 1).

Figure 3.3.4. Evolution of the cosine of the angle between the weight vector and the principal
eigenvector of the correlation matrix for Oja's rule (solid curve) and for Yuille et al. rule (dashed curve)
with ρ = 0.01. The training set is the same as for Figure 3.3.3.
3.3.4 Linsker's Rule

Linsker (1986, 1988) proposed and studied the general unsupervised learning rule:

(3.3.9)

subject to bounding constraints on the weights, w− ≤ wi ≤ w+, with y = wTx. The average weight change in Equation (3.3.9) is given by

(3.3.10)

where . If we set a > 0, bi = a for i = 1, 2, ..., n, and d = 0 in Equation (3.3.10), we get

(3.3.11)

where 1 is an n-dimensional column vector of ones. Equation (3.3.11) gives the ith average weight
change:

(3.3.12)

where Ci is the ith row of C. If we assume that the components of x are random and are drawn from the
same probability distribution with the same mean, then <xi> is the same for all i, and Equation (3.3.11) reduces to

(3.3.13)

Here, it can be easily shown that Equation (3.3.13) has the following
associated criterion function:

(3.3.14)

Therefore, by noting that wTCw is the mean square of the unit's output activity, <y2>, the simplified
form of <∆w> in Equation (3.3.13) corresponds to maximizing <y2> subject to a constraint on the sum of
the weights. It can be shown that this rule is unstable, but with the restriction w− ≤ wi ≤ w+, the
final state will be clamped at a boundary value, w− or w+. If a is large enough, w− = −1,
w+ = +1, and n is odd, then the above rule converges to a weight vector with (n + 1)/2 of the weights equal to w+
and the remaining weights equal to w−. The weight vector configuration is such that <y2> is
maximized. For the n even case, one of the weights at w− will be pushed towards zero so that the constraint
is maintained.

3.3.5 Hebbian Learning in a Network Setting: Principal Component Analysis (PCA)

Hebbian learning can lead to more interesting computational capabilities when applied to a network of
units. In this section, we apply unsupervised Hebbian learning in a simple network setting to extract the
m principal directions of a given set of data (i.e., the leading eigenvector directions of the input vectors'
auto-correlation matrix).

Amari (1977a) and later Linsker (1988) pointed out that principal component analysis (PCA) is
equivalent to maximizing the information content in the outputs of a network of linear units. The aim
of PCA is to extract m normalized orthogonal vectors, ui, i = 1, 2, ..., m, in the input space that account
for as much of the data's variance as possible. Subsequently, the n-dimensional input data (vectors x)
may be transformed to a lower m-dimensional space without losing essential intrinsic information. This
can be done by projecting the input vectors onto the m-dimensional subspace spanned by the extracted
orthogonal vectors, ui, according to the inner products xTui. Since m is smaller than n, the data
undergoes a dimensionality reduction. This, in turn, makes subsequent processing of the data (e.g.,
clustering or classification) much easier to handle.

The following is an outline for a direct optimization-based method for determining the ui vectors. Let

x ∈ Rn be an input vector generated according to a zero-mean probability distribution p(x). Let u


denote a vector in Rn, onto which the input vectors are to be projected. The projection xTu is the linear
sum of n zero-mean random variables, which is itself a zero-mean random variable. Here, the objective

is to find the solution(s) u* which maximizes <(xTu)2>, the variance of the projection xTu with
respect to p(x), subject to ||u|| = 1. In other words, we are interested in finding the maxima w* of the
criterion function

J(w) = <(xTw)2> / ||w||2 (3.3.15)

from which the unity norm solution(s) u* can be computed as u* = w*/||w*|| with ||w*|| ≠ 0. Now, by
noting that <(xTw)2> = <wTx xTw> = wT<x xT>w, and recalling that <x xT> is the
autocorrelation matrix C, Equation (3.3.15) may be expressed as

J(w) = wTCw / ||w||2 (3.3.16)
The extreme points of J(w) are the solutions to ∇J(w) = 0, which gives

[2/(wTw)] [Cw − J(w)w] = 0

or

Cw = J(w)w       (3.3.17)

The solutions to Equation (3.3.17) are w = ac(i), i = 1, 2, ..., n, a ∈ R. In other words, the maxima w* of
J(w) must point in the same or opposite direction of one of the eigenvectors of C. Upon careful
examination of the Hessian of J(w) [as in Section 4.3.2], we find that the only maximum exists at
w* = ac(1) for some finite real-valued a (in this case, J(w*) = λ1). Therefore, the variance of the
projection xTu is maximized for u = u1 = w*/||w*|| = c(1). Next, we repeat the above maximization of
J(w) in Equation (3.3.15), but with the additional requirement that the vector w be orthogonal to c(1).
Here, it can be readily seen that the maximum of J(w) is equal to λ2, and occurs at w* = ac(2). Thus,
u2 = c(2). Similarly, the solution u3 = c(3) maximizes J under the constraint that u3 be orthogonal to u1
and u2, simultaneously. Continuing this way, we arrive at the m principal directions u1 through um.
Again, these vectors are ordered so that u1 points in the direction of the maximum data variance, and
the second vector u2 points in the direction of maximum variance in the subspace orthogonal to u1, and
so on. Now, the projections xTui, i = 1, 2, ..., m, are called the principal components of the data; these
projections are equivalent to the ones obtained by the classical Karhunen-Loève transformation of
statistics (Karhunen, 1947; Loève, 1963). Note that the previous Hebbian rules discussed in the single

linear unit setting all maximize <y2>, the output signal variance, and hence they extract the first
principal component of the zero-mean data. If the data has a non-zero mean, then we subtract the mean
from it before extracting the principal components.
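To make this concrete, the following minimal numpy sketch (illustrative code, not from the text; all names are assumptions) forms the sample autocorrelation matrix of zero-mean data, takes its leading eigenvectors as the principal directions ui, and checks that the variance of each projection xTui equals the corresponding eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean data: 500 samples in R^5 with anisotropic variance.
X = rng.normal(size=(500, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)                      # subtract the mean before extracting components

C = (X.T @ X) / len(X)                   # sample autocorrelation matrix C = <x x^T>
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 2
U = eigvecs[:, :m]                       # first m principal directions u_1 ... u_m
Y = X @ U                                # principal components (projections x^T u_i)

# The variance of the i-th projection matches the i-th eigenvalue (up to sampling error).
print(np.var(Y, axis=0), eigvals[:m])
```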

PCA in a Network of Interacting Units

Here, an m-output network is desired which is capable of incrementally and efficiently computing the
first m principal components of a given set of vectors in Rn. Consider a network of m linear units, m <
n. Let wi be the weight vector of the ith unit. Oja (1989) extended his learning rule in Equation (3.3.6)
to the m-unit network according to (we will drop the k superscript here for clarity):

Δwij = ρ yi [ xj − Σ(k=1 to m) yk wkj ]       (3.3.18)

where wij is the jth weight for unit i and yi is its output. Another rule proposed by Sanger (1989) is
given by:

Δwij = ρ yi [ xj − Σ(k=1 to i) yk wkj ]       (3.3.19)
Equations (3.3.18) and (3.3.19) require communication between the units in the network during
learning. Equation (3.3.18) requires the jth input signal xj, as well as the output signals yk of all units, to
be available when adjusting the jth weight of unit i. Each signal yk is modulated by the jth weight of
unit k and is fed back as an inhibitory input to unit i. Thus, unit i can be viewed as employing the
original Hebbian rule of Equation (3.3.1) to update its jth weight, but with an effective input signal
whose jth component is given by the term inside parentheses in Equation (3.3.18). Sanger's rule
employs similar feedback except that the ith unit only receives modulated output signals generated by
units with index k, where k ≤ i.

The above two rules are identical to Oja's rule in Equation (3.3.6) for m = 1. For m > 1, they only differ
by the upper limit on the summation. Both rules converge to wi's that are orthogonal. Oja's rule does
not generally find the eigenvector directions of C. However, in this case the m weight vectors converge
to span the same subspace as the first m eigenvectors of C. Here, the weight vectors depend on the
initial conditions and on the order of presentation of the input data, and therefore differ individually
from trial to trial. On the other hand, Sanger's rule is insensitive to initial conditions and to the order of

presentation of input data. It converges to wi = c(i), in order, where the first unit (i = 1) extracts the
principal eigenvector c(1). Some insights into the convergence behavior of this rule/PCA net can be gained by
exploring the analysis in Problems 3.3.5 and 3.3.6. Additional analysis is given in Section 4.4.

Sanger (1989) gave a convergence proof of the above PCA net employing Equation (3.3.19) under the

assumption that the learning rate ρk approaches zero as k → ∞, with Σk ρk = ∞ and Σk (ρk)2 < ∞. The significance of this proof is that it
guarantees the dynamics in Equation (3.3.19) to find the first m eigenvectors of the auto-correlation
matrix C (assuming that the eigenvalues λ1 through λm are distinct). An equally important property of
Sanger's PCA net is that there is no need to compute the full correlation matrix C. Rather, the first m
eigenvectors of C are computed by the net adaptively and directly from the input vectors. This property
can lead to significant savings in computational effort if the dimension of the input vectors is very large
compared to the desired number of principal components to be extracted. For an interesting application
of PCA to image coding/compression, the reader is referred to Gonzalez and Wintz (1987) and Sanger
(1989). See also Section 5.3.6.
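The following is a minimal sketch of Sanger's rule as described for Equation (3.3.19) (illustrative code, not from the text): each unit i is updated with the input minus the feedback sum over units k ≤ i, and a slowly decreasing learning rate is assumed.

```python
import numpy as np

def sanger_pca(X, m, lr0=0.05, epochs=50, seed=0):
    """Train m linear units with Sanger's rule; rows of W approach the leading eigenvectors of C."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, X.shape[1]))    # one weight vector per unit (rows)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            t += 1
            lr = lr0 / (1 + 1e-3 * t)                  # slowly decreasing learning rate
            y = W @ x                                  # unit outputs y_i = w_i^T x
            for i in range(m):
                feedback = W[: i + 1].T @ y[: i + 1]   # sum over k <= i of y_k w_k
                W[i] += lr * y[i] * (x - feedback)
    return W

# Example: zero-mean data with a dominant direction; the rows of W become
# (approximately) orthonormal and ordered by decreasing variance.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4)) * np.array([2.0, 1.0, 0.5, 0.1])
W = sanger_pca(X, m=2)
print(W @ W.T)    # close to the identity matrix
```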

PCA in a Single-Layer Network with Adaptive Lateral Connections

Another approach for PCA is to use the single layer network with m linear units and trainable lateral
connections between units, as shown in Figure 3.3.5 (Rubner and Tavan, 1989).
Figure 3.3.5: PCA network with adaptive lateral connections.

The lateral connections uij are present from unit j to unit i only if i > j. The weights wij connecting the
inputs xk to the units are updated according to the simple normalized Hebbian learning rule:

(3.3.20)

On the other hand, the lateral weights employ anti-Hebbian learning, in the form:

(3.3.21)

where the learning rate is positive. Note that the first unit, with index 1, extracts c(1). The second unit
tries to do the same, except that the lateral connection u21 from unit 1 inhibits y2 from approaching c(1);
hence, unit 2 is forced to settle for the second principal direction, namely c(2), and so on. Thus, this
network extracts the first m principal data directions in descending order, just as Sanger's network does.
Since the principal directions are orthogonal, the correlations yiyj approach zero as convergence is
approached, and thus the uij weights are driven to zero.
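A rough sketch of this lateral-connection network is given below (illustrative code, not the book's). Since Equations (3.3.20) and (3.3.21) are not reproduced here, the forward update uses explicit renormalization of each weight vector as a stand-in for the normalized Hebbian rule, while the lateral update is the anti-Hebbian decorrelation described above; function and parameter names are assumptions.

```python
import numpy as np

def lateral_pca(X, m, lr_w=0.02, lr_u=0.05, epochs=60, seed=0):
    """Hebbian forward weights plus anti-Hebbian lateral weights (Figure 3.3.5 style)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, X.shape[1]))   # forward weights w_i (one row per unit)
    U = np.zeros((m, m))                              # lateral weights u_ij, used only for i > j
    for _ in range(epochs):
        for x in rng.permutation(X):
            a = W @ x                                 # feedforward activations w_i^T x
            y = np.empty(m)
            for i in range(m):
                y[i] = a[i] + U[i, :i] @ y[:i]        # add lateral (inhibitory) input from units j < i
            for i in range(m):
                W[i] += lr_w * y[i] * x               # Hebbian forward update
                W[i] /= np.linalg.norm(W[i])          # renormalize: stand-in for Eq. (3.3.20)
                U[i, :i] -= lr_u * y[i] * y[:i]       # anti-Hebbian lateral update (decorrelation)
    return W, U
```

As training proceeds, the output correlations shrink and the lateral weights decay toward zero, leaving the forward weight vectors near the leading eigenvector directions.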

3.3.6 Nonlinear PCA

PCA networks, such as those discussed above, extract principal components which provide an optimal
linear mapping from the original input space to a lower dimensional output space whose dimensions
are determined by the number of linear units in the network. The optimality of this mapping is with
respect to the second order statistics of the training set {xk}.

Optimal PCA mappings based on more complex statistical criteria are also possible if nonlinear units are
used (Oja, 1991; Taylor and Coombes, 1993). Here, the extracted principal components can be thought
of as the eigenvectors of the matrix of higher-order statistics which is a generalization of the second
order statistics matrix (correlation matrix C). The nonlinearities implicitly introduce higher-order
moments into the optimal solution.

Two natural ways of introducing nonlinearities into the PCA net are via higher-order units or via units
with nonlinear activations. In order to see how higher-order units lead to higher-order statistics PCA,
consider the case of a simple network consisting of a single quadratic unit with n inputs xi, i = 1, 2, ..., n. The
input/output relation for this quadratic unit is given by:

y = Σ(i=1 to n) wi xi + Σ(i=1 to n) Σ(j=i to n) wij xi xj       (3.3.22)

Another way of interpreting Equation (3.3.22) is to write it in the form of a linear unit, such as

y = wTz       (3.3.23)

where

z = [x1 x2 x3 ... xn x1x1 x1x2 ... x1xn x2x2 x2x3 ... xnxn]T

and

w = [w1 w2 ... wn w11 w12 ... w1n w22 w23 ... wnn]T

is a vector of real-valued parameters. Therefore, the n-input quadratic unit is equivalent to a linear unit
receiving its inputs from a fixed preprocessing layer. This preprocessing layer transforms the original
input vectors xk into higher-dimensional vectors zk, as in a QTG [refer to Section 1.1.2].

Now, if stable Hebbian learning is used to adapt the w parameter vector, this vector will stabilize at the
principal eigenvector of the correlation matrix Cz = <zzT>. This matrix can be written in block form in
terms of the inputs xi as

Cz = [ C1  C2
       C3  C4 ]          (3.3.24)

Note that C1 = <xxT>, as in Equation (3.3.4). The matrices C2 and C3 reflect third-order statistics and C4
reflects fourth-order statistics. Yet higher-order statistics can be realized by allowing for higher-order
terms of the xi's in z.
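The preprocessing layer described above can be sketched as follows (illustrative code, not from the text): the input x is expanded into the vector z of linear and quadratic terms, so that a single linear unit y = wTz operating on z realizes the quadratic unit.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(x):
    """Expand x into z = [x_1 ... x_n, x_1^2, x_1 x_2, ..., x_n^2]^T."""
    x = np.asarray(x, dtype=float)
    quad = [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.concatenate([x, quad])

# A linear unit y = w^T z operating on these features realizes the quadratic unit;
# Hebbian learning on z then extracts the principal eigenvector of C_z = <z z^T>.
x = np.array([1.0, -2.0, 0.5])
print(quadratic_features(x))   # length n + n(n+1)/2 = 3 + 6 = 9
```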

Extraction of higher-order statistics is also possible if units with nonlinear activation functions are used
(e.g., sigmoidal activation units). This can be seen by employing Taylor series expansion of the output
of a nonlinear unit y = f(wTx) about wTx = 0. Here, it is assumed that all derivatives of f exist. This
expansion allows us to write this unit's output as

y = f(0) + f'(0)(wTx) + [f''(0)/2!](wTx)2 + [f'''(0)/3!](wTx)3 + ...       (3.3.25)

which may be interpreted as the output of a high-order (polynomial) unit (see Problem 3.3.7).
Therefore, higher-order principal component extraction is expected when Hebbian-type learning is
applied to this unit. For further exploration in nonlinear PCA, the reader may consult Karhunen (1994),
Sudjianto and Hassoun (1994), and Xu (1994). Also, see Section 5.3.6.

3.4 Competitive Learning

In the previous section, we considered simple Hebbian-based networks of linear units which employed
some degree of competition through lateral inhibition in order for each unit to capture a principal
component of the training set. In this section, we extend this notion of competition among units and
specialization of units to tackle a different class of problems involving clustering of unlabeled data or
vector quantization. Here, a network with binary-valued outputs, only one of which is "on" at a time, can
indicate which of several categories an input belongs to. These categories are to be discovered by the network
on the basis of correlations in the input data. The network would then classify each cluster of "similar"
input data as a single output class.

3.4.1 Simple Competitive Learning

Because we are now dealing with competition, it only makes sense to consider a group of interacting
units. We assume the simplest architecture where we have a single layer of units, each receiving the
same input x ∈ Rn and producing an output yi. We also assume that only one unit is active at a given time.
This active unit is called the "winner" and is determined as the unit with the largest weighted sum neti, where

neti = wiTxk       (3.4.1)

and xk is the current input. Thus, unit i is the winning unit if

wiTxk ≥ wjTxk   for j = 1, 2, ..., m       (3.4.2)

which may be written as

||xk − wi|| ≤ ||xk − wj||   for j = 1, 2, ..., m       (3.4.3)

if ||wi|| = 1 for all i = 1, 2, ..., m. Thus, the winner is the node with the weight vector closest (in a
Euclidean distance sense) to the input vector. It is interesting to note that lateral inhibition may be
employed here in order to implement the "winner-take-all" operation in Equation (3.4.2) or (3.4.3).
This is similar to what we have described in the previous section with a slight variation: Each unit
inhibits all other units and excites itself, as shown in Figure 3.4.1.

Figure 3.4.1. Single layer competitive network.

In order to assure winner-take-all operation, a proper choice of lateral weights and unit activation
functions must be made (e.g., see Grossberg, 1976, and Lippmann, 1987). One possible choice for the
lateral weights is

(3.4.4)

where 0 < ε < 1/m, ε being the magnitude of the mutual inhibition, and m is the number of units in the network. An appropriate activation function for this
type of network is shown in Figure 3.4.2 where T is chosen such that the outputs yi do not saturate at 1
before convergence of the winner-take-all competition; after convergence, only the winning unit will
saturate at 1 with all other units having zero outputs. Note, however, that if one is training the net as
part of a computer simulation, there is no need for the winner-take-all net to be implemented explicitly;
it is more efficient from a computational point of view to perform the winner selection by a direct search
for the maximum neti. Thus far, we have only described the competition mechanism of the competitive
learning technique. Next, we give a learning equation for weight updating.
Figure 3.4.2. Activation function for units in the competitive network of Figure 3.4.1.

For a given input xk drawn from a random distribution p(x), the weights of the winning unit are
updated (the weights of all other units are left unchanged) according to (Grossberg, 1969; von der
Malsburg, 1973; Rumelhart and Zipser, 1985):

Δwi* = ρ (xk − wi*)       (3.4.5)

where i* is the index of the winning unit. If the magnitudes of the input vectors contain no useful information, a more appropriate rule to use is

Δwi* = ρ (xk/||xk|| − wi*)       (3.4.6)

The above rules tend to tilt the weight vector of the current winning unit in the direction of the current
input. The cumulative effect of the repetitive application of the above rules can be described as follows.
Let us view the input and weight vectors as points scattered on the surface of a hypersphere (or a circle
as in Figure 3.4.3). The effect of the application of the competitive learning rule is to sensitize certain
units towards neighboring clusters of input data. Ultimately, some units (frequent winner units) will
evolve so that their weight vector points towards the "center of mass" of the nearest significant dense
cluster of data points. This effect is illustrated pictorially for a simple example in Figure 3.4.3.
(a) (b)

Figure 3.4.3. Simple competitive learning. (a) Initial weight vectors; (b) weight vector configuration
after performing simple competitive learning.

At the onset of learning, we do not know the number of clusters in the data set. So we normally
overestimate this number by including excess units in the network. This means that after convergence,
some units will be redundant in the sense that they do not evolve significantly and thus do not capture
any data clusters. This can be seen, for example, in Figure 3.4.3 where the weight vector w1 does not
significantly evolve throughout the computation. These are typically the units that are initialized to
points in the weight space that have relatively low overlap with the training data points; such units may
still be desirable since they may capture new clusters if the underlying distribution p(x) changes in
time, or, if desired, their probability of occurrence may be reduced by using some of the following
heuristics: (1) Initialize the weight vectors to randomly selected sample input vectors; (2) use "leaky
learning" (Rumelhart and Zipser, 1985). Here, all loser units are also updated in response to an input
vector, but with a learning rate much smaller than the one used for the winner unit; (3) update the
neighboring units of the winner unit; and/or (4) smear the input vectors with added noise using a long-
tailed distribution (Szu, 1986).

After learning has converged, it is necessary to "calibrate" the above clustering network in order to
determine the number of units representing the various learned clusters, and cluster membership. Here,
the weights of the net are held fixed and the training set is used to interrogate the network for sensitized
units. The following example demonstrates the above ideas for a simple data clustering problem.

Example 3.4.1: Consider the set of unlabeled two-dimensional data points plotted in Figure 3.4.4.
Here, we demonstrate the typical solution (cluster formation) generated by a four-unit competitive net,
which employs Equations (3.4.2) and (3.4.5). No normalization of either the weight vectors or the input
vectors is used. During training, the input samples (data points) are selected uniformly at random from
the data set. The four weight vectors are initialized to random values close to the center of the plot in
Figure 3.4.4. Training with a learning rate of 0.01 is performed for 1000 iterations. The resulting weight vector
trajectories are plotted in Figure 3.4.5(a) with the initial weights marked by "*". Note how one of the
units never evolved beyond its initial weight vector. This is because this unit never became a winner.
Each of the remaining three units evolved its weight vector to a distinct region in the input space. Since
the learning rate is small, the weight vectors are shown to enter and stay inside "tiny" neighborhoods; the
weight vectors fluctuate inside their respective terminal neighborhoods. These fluctuations are amplified if a
relatively larger value (0.05) is used, as demonstrated in Figure 3.4.5(b). Finally, the trained network is
calibrated by assigning a unique cluster label for any unit which becomes a winner for one or more
training samples. Here, Equation (3.4.2) is used to determine the winning unit. During calibration, only
three units are found to ever become winners. Figure 3.4.6 depicts the result of the calibration process.
Here, the symbols "+", "o", and a third marker are used to tag the three winner units, respectively. The
figure shows the corresponding symbol printed at the exact position of each training sample x that causes
the unit carrying that label to be a winner. We should note that the exact same clusters in Figure 3.4.6 are
generated by the net with a learning rate of 0.05. The cluster tagged by the third marker looks interesting
because of its obvious bimodal structure. Intuitively speaking, one would have expected this cluster to be
divided by the competitive net into two distinct clusters. In fact, if one carefully looks at Figure 3.4.5(b),
this is what the net attempted to do, but failed. However, only a few simulations out of the large number of
runs attempted (not shown here) did in fact result in this "intuitive" solution. In Chapter 4, the competitive
learning rule in Equation (3.4.5) is shown to correspond to stochastic gradient descent on the criterion
J = (1/2)||xk − wi*||2, where wi* is the weight vector of the winner unit. Therefore, one may explain the
three-cluster solution in this example as representing a suboptimal solution which corresponds to a local
minimum of J(w).
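A minimal sketch of the simple competitive learning used in this example follows (illustrative code; the parameters roughly mirror the example but are assumptions). The winner is found by a direct search for the nearest weight vector, as in Equation (3.4.3), and only the winner is updated, as in Equation (3.4.5).

```python
import numpy as np

def competitive_learning(X, m=4, lr=0.01, steps=1000, seed=0):
    """Simple competitive learning: move only the winning unit toward the current input."""
    rng = np.random.default_rng(seed)
    W = X.mean(axis=0) + 0.1 * rng.normal(size=(m, X.shape[1]))   # start near the data centroid
    for _ in range(steps):
        x = X[rng.integers(len(X))]                               # draw a training sample at random
        winner = np.argmin(np.linalg.norm(W - x, axis=1))         # Euclidean winner, as in Eq. (3.4.3)
        W[winner] += lr * (x - W[winner])                         # update rule, as in Eq. (3.4.5)
    return W

def calibrate(X, W):
    """Assign each sample to its winning unit; units that never win capture no cluster."""
    return np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])

# Example: two Gaussian blobs; typically only some of the four units become winners.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([-3, 0], 0.5, (100, 2)), rng.normal([3, 1], 0.5, (100, 2))])
W = competitive_learning(X)
print(np.bincount(calibrate(X, W), minlength=4))   # sample counts captured by each unit
```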

Figure 3.4.4. Unlabeled two-dimensional data used in training the simple competitive net of Example
3.4.1.

(a)

(b)

Figure 3.4.5. Weight vector evolution trajectories for a four-unit competitive net employing Equations
(3.4.2) and (3.4.5). These trajectories are shown superimposed on the plane containing the training
data. A "*" is used to indicate the initial setting of the weights for each of the four units. (a) Learning
rate equals 0.01, and (b) learning rate equals 0.05.
Figure 3.4.6. A three-cluster solution for the data shown in Figure 3.4.4. This solution represents a
typical cluster formation generated by the simple four-unit competitive net of Example 3.4.1.

3.4.2 Vector Quantization

One of the common applications of competitive learning is adaptive vector quantization for data
compression (e.g., speech and image data). Here, we need to categorize a given set of xk data points
(vectors) into m "templates" so that later one may use an encoded version of the corresponding
template of any input vector to represent the vector, as opposed to using the vector itself. This leads to
efficient quantization (compression) for storage and for transmission purposes (albeit at the expense of
some distortion).

Vector quantization is a technique whereby the input space is divided into a number of distinct regions,
and for each region a "template" (reconstruction vector) is defined (Linde et al., 1980; Gray, 1984).
When presented with a new input vector x, a vector quantizer first determines the region in which the
vector lies. Then, the quantizer outputs an encoded version of the reconstruction vector wi representing
that particular region containing x. The set of all possible reconstruction vectors wi is usually called the
"codebook" of the quantizer. When the Euclidean distance similarity measure is used to decide on the
region to which the input x belongs, the quantizer is called a Voronoi quantizer. The Voronoi quantizer
partitions its input space into Voronoi cells (Gray, 1984), each cell being represented by one of the
reconstruction vectors wi. The ith Voronoi cell contains those points of the input space that are closer
(in a Euclidean sense) to the vector wi than to any other vector wj, j ≠ i. Figure 3.4.7 shows an example
of the input space partitions of a Voronoi quantizer with four reconstruction vectors.

Figure 3.4.7. Input space partitions realized by a Voronoi quantizer with four reconstruction vectors.
The four reconstruction vectors are shown as filled circles in the figure.

The competitive learning rule in Equation (3.4.5) with a winning unit determination based on the
Euclidean distance as in Equation (3.4.3) may now be used in order to allocate a set of m reconstruction

vectors wi, i = 1, 2, ..., m, to the input space of n-dimensional vectors x. Let x be distributed


according to the probability density function p(x). Initially, we set the starting values of the vectors wi
to the first m randomly generated samples of x. Additional samples x are then used for training. Here,
the learning rate in Equation (3.4.5) is selected as a monotonically decreasing function of the number
of iterations k. Based on empirical results, Kohonen (1989) conjectured that, in an average sense, the
asymptotic local point density of the wi's (i.e., the number of wi falling in a small volume of Rn
centered at x) obtained by the above competitive learning process takes the form of a continuous,
monotonically increasing function of p(x). Thus, this competitive learning algorithm may be viewed as
an "approximate" method for computing the reconstruction vectors wi in an unsupervised manner.

Kohonen (1989) designed supervised versions of vector quantization (called learning vector
quantization, LVQ) for adaptive pattern classification. Here, class information is used to fine tune the
reconstruction vectors in a Voronoi quantizer, so as to improve the quality of the classifier decision
regions. In pattern classification problems, it is the decision surface between pattern classes and not the
inside of the class distribution, which should be described most accurately. The above quantizer
process can be easily adapted in order to optimize the placement of the decision surface between
different classes. Here, one would start with a trained Voronoi quantizer and calibrate it using a set of
labeled input samples (vectors). Each calibration sample is assigned to that wi which is closest. Each wi
is then labeled according to the majority of classes represented among those samples which have been
assigned to wi. Here, the distribution of the calibration samples to the various classes, as well as the
relative numbers of the wi assigned to these classes, must comply with the a priori probabilities of the
classes, if such probabilities are known. Next, the tuning of the decision surfaces is accomplished by
rewarding correct classifications and punishing incorrect ones. Let the training vector xk belong to the
class cj. Assume that the closest reconstruction vector wi to xk carries the label of class cl. Then, only
vector wi is updated according to the following supervised rule (LVQ rule):

Δwi = +ρk (xk − wi)   if cl = cj (xk is classified correctly)
Δwi = −ρk (xk − wi)   if cl ≠ cj (xk is classified incorrectly)       (3.4.7)

where ρk is assumed to be a monotonically decreasing function of the number of iterations k. After


convergence, the input space Rn is again partitioned by a Voronoi tessellation corresponding to the
tuned wi vectors. The primary effect of the reward/punish rule in Equation (3.4.7) is to minimize the
number of misclassifications. At the same time, however, the vectors wi are pulled away from the
zones of class overlap where misclassifications persist.

The convergence speed of LVQ can be improved if each vector wi has its own adaptive learning rate
given by (Kohonen, 1990):

ρi(k) = ρi(k−1) / [1 + s(k) ρi(k−1)],   s(k) = +1 if xk is classified correctly and s(k) = −1 otherwise       (3.4.8)

This recursive rule causes ρi to decrease if wi classifies xk correctly. Otherwise, ρi increases. Equations
(3.4.7) and (3.4.8) define what is known as an "optimized learning rate" LVQ (OLVQ). Another
improved algorithm, named LVQ2, has also been suggested by Kohonen et al. (1988), which approaches
the performance predicted by Bayes decision theory (Duda and Hart, 1973).
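The sketch below illustrates the reward/punish fine-tuning in the spirit of Equations (3.4.7) and (3.4.8) (illustrative code; names and the clamping of the adaptive rate are assumptions).

```python
import numpy as np

def lvq_train(X, labels, W, W_labels, lr0=0.05, steps=2000, seed=0):
    """LVQ fine-tuning of a codebook W whose rows carry class labels W_labels."""
    rng = np.random.default_rng(seed)
    lr = np.full(len(W), lr0)                          # one adaptive rate per codebook vector
    for _ in range(steps):
        k = rng.integers(len(X))
        x, c = X[k], labels[k]
        i = np.argmin(np.linalg.norm(W - x, axis=1))   # nearest reconstruction vector
        s = 1.0 if W_labels[i] == c else -1.0
        W[i] += s * lr[i] * (x - W[i])                 # reward correct, punish incorrect classification
        lr[i] = lr[i] / (1.0 + s * lr[i])              # rate decreases if correct, increases otherwise
        lr[i] = min(lr[i], lr0)                        # keep the adaptive rate bounded
    return W

# Typical use: obtain W by unsupervised competitive learning (or from labeled samples),
# label each row by majority vote of the calibration samples it wins, then call lvq_train.
```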

Some theoretical aspects of competitive learning are considered in the next chapter. More general
competitive networks with stable categorization behavior have been proposed by Carpenter and
Grossberg (1987 a, b). One of these networks, called ART1, is described in Chapter 6.

3.5 Self-Organizing Feature Maps: Topology Preserving Competitive Learning

Self-organization is a process of unsupervised learning whereby significant patterns or features in the


input data are discovered. In the context of a neural network, self-organizing learning consists of
adaptively modifying the synaptic weights of a network of locally interacting units in response to input
excitations and in accordance with a learning rule, until a final useful configuration develops. The local
interaction of units means that changes in the behavior of a unit only (directly) affect the behavior
of its immediate neighborhood. The key question here is how a useful configuration could evolve from
self-organization. The answer lies essentially in a naturally observed phenomenon whereby global
order can arise from local interactions (Turing, 1952). This phenomenon applies to neural networks
(biological and artificial) where many originally random local interactions between neighboring units
(neurons) of a network couple and coalesce into states of global order. This global order leads to
coherent behavior, which is the essence of self-organization.
In the following, a modified version of the simple competitive learning network discussed in Section
3.4.1 is presented, which exhibits self-organization features. This network attempts to map a set of
input vectors xk in Rn onto an array of units (normally one- or two-dimensional) such that any
topological relationships among the xk patterns are preserved and are represented by the network in
terms of a spatial distribution of unit activities. The more related two patterns are in the input space, the
closer we expect the positions in the array of the two units representing these patterns to be. In other words, if
x1 and x2 are "similar" or are topological neighbors in Rn, and if r1 and r2 are the locations of the
corresponding winner units in the net/array, then the Euclidean distance ||r1 − r2|| is expected to be
small. Also, ||r1 − r2|| approaches zero as x1 approaches x2. The idea is to develop a topographic map of
the input vectors, so that similar input vectors would trigger nearby units. Thus, a global organization
of the units is expected to emerge.

An example of such topology-preserving self-organizing mappings that exist in animals is the
somatosensory map from the skin onto the somatosensory cortex, where there exists an image of the
body surface. The retinotopic map from the retina to the visual cortex is another example. It is believed
that such biological topology preserving maps are not entirely preprogrammed by the genes and that
some sort of (unsupervised) self-organizing learning phenomenon exists that tunes such maps during
development. Two early models of topology preserving competitive learning were proposed by von der
Malsburg (1973) and Willshaw and von der Malsburg (1976) for the retinotopic map problem. In this
section, we present a detailed topology preserving model due to Kohonen (1982a) which is commonly
referred to as the self-organizing feature map (SOFM).

3.5.1 Kohonen's SOFM

The purpose of Kohonen's self-organizing feature map is to capture the topology and the probability
distribution of input data (Kohonen, 1982a and 1989). This model generally involves an architecture
consisting of a two-dimensional structure (array) of linear units, where each unit receives the same
input xk, xk ∈ Rn. Each unit in the array is characterized by an n-dimensional weight vector. The ith unit
weight vector wi is sometimes viewed as a position vector which defines a "virtual position" for unit i
in Rn. This, in turn, will allow us to interpret changes in wi as movements of unit i. However, one
should keep in mind that no physical movements of units take place.

The learning rule is similar to that of simple competitive learning in Equation (3.4.5) and is defined by:

Δwi = ρ Λ(i, i*) (xk − wi)       (3.5.1)

where i* is the index of the winner unit. The winner unit is determined according to the Euclidean
distance as in Equation (3.4.3), with no weight vector normalization. The major difference between this
update rule and that of simple competitive learning is the presence of the neighborhood function Λ(i, i*)
in the former. This function is very critical for the successful preservation of topological properties. It is
normally symmetric, with values close to one for units i close to i*, and it decreases monotonically with
the Euclidean distance ||ri − ri*|| between units i and i* in the array.

At the onset of learning, Λ(i, i*) defines a relatively large neighborhood whereby all units in the
net are updated for any input xk. As learning progresses, the neighborhood is shrunk down until it
ultimately goes to zero, where only the winner unit is updated. The learning rate ρ must also follow a
monotonically decreasing schedule in order to achieve convergence. One may think of the initial large
neighborhood as effecting an exploratory global search, which is then continuously refined to a local
search as the variance of Λ approaches zero. The following is one possible choice for Λ(i, i*):

Λ(i, i*) = exp[ −||ri − ri*||2 / (2σ2) ]       (3.5.2)

where ri is the position of unit i in the array and the variance σ2 controls the width of the neighborhood.
Here, ρ and σ may be set as monotonically decreasing functions of the learning step k. Ritter and Schulten
(1988a) proposed the following update schedule for ρ,

ρ(k) = ρ0 (ρf/ρ0)^(k/kmax)       (3.5.3)

and for σ,

σ(k) = σ0 (σf/σ0)^(k/kmax)       (3.5.4)

with ρ0 (σ0) and ρf (σf) controlling the initial and final values of the learning rate (neighborhood width),
respectively, and kmax the maximum number of learning steps anticipated.

There is no theory to specify the parameters of these learning schedules for arbitrary training data.

However, practical values are 0 < ρ0 ≤ 1 (typically 0.8), ρf << 1, σ0 comparable to md (md is the number of
units along the largest diagonal of the array) or an equivalent value that will permit Λ(i, i*) at k = 0 to reach
all units when i* is set close to the center of the array, and σf = 0.5. Finally, kmax is usually set two or more
orders of magnitude larger than the total number of units in the net.

Let the input vector x be a random variable with a stationary probability density function p(x). Then,
the basis for Equation (3.5.1) and various other self-organizing algorithms is captured by the following
two step process: (1) Locate the best-matching unit for the input vector x; (2) Increase matching at this
unit and its topological neighbors. The computation achieved by a repetitive application of this process
is rather surprising and is captured by the following proposition due to Kohonen (1989): "The wi
vectors tend to be ordered according to their mutual similarity, and the asymptotic local point density
of the wi, in an average sense, is of the form g(p(x)), where g is some continuous, monotonically
increasing function." This proposition is tested by the following experiments.

3.5.2 Examples of SOFMs

Figure 3.5.1 shows an example of mapping uniform random points inside the positive unit square onto
a 10 × 10 planar array of units with rectangular topology. Here, each unit i has four immediate
neighboring units, which are located symmetrically at a distance of 1 from unit i. In general, however,
other array topologies such as hexagonal topology may be assumed. In the figure, the weight vectors of
all units are shown as points superimposed on the input space. Connections are shown between points
corresponding to topologically neighboring units in the array, for improved visualization. Thus, a line
connecting two weight vectors wi and wj is only used to indicate that the two corresponding units i and
j are adjacent (immediate neighbors) in the array. The weights are initialized randomly near (0.5, 0.5)
as shown in Figure 3.5.1 (a). During training the inputs to the units in the array are selected randomly
and independently from a uniform distribution p(x) over the positive unit square. Figures 3.5.1(b)-(d)
show snapshots of the time evolution of the feature map. Initially, the map untangles and orders its
units as in parts (b) and (c) of the figure. Ultimately, as shown in Figure 3.5.1(d), the map spreads to
fill all of the input space except for the border region which shows a slight contraction of the map.
Figures 3.5.2 through 3.5.4 show additional examples of mapping uniformly distributed points from a
disk, a triangle, and a hollow disk, respectively, onto a 15 × 15 array of units. The initial weight
distribution is shown in part (a) of each of these figures. Learning rule parameters are provided in the
figure captions.

The feature maps in Figures 3.5.1 through 3.5.4 all share the following properties. First, it is useful to
observe that there are two phases in the formation of the map: an ordering phase and a convergence
phase. The ordering phase involves the initial formation of the correct topological ordering of the
weight vectors. This is roughly accomplished during the first several hundred iterations of the learning
algorithm. The fine tuning of the map is accomplished during the convergence phase, where the map
converges asymptotically to a solution which approximates p(x). During this phase, the neighborhood
width σ and the learning rate ρ take on very small values, thus contributing to slow convergence. For
good results, the convergence phase may take 10 to 1000 times as many steps as the ordering phase.
Another common property in the above SOFMs is a border aberration effect which causes a slight
contraction of the map, and a higher density of weight vectors at the borders. This aberration is due to
the "pulling" by the units inside the map. The important thing, though, is to observe how in all cases
the w vectors are ordered according to their mutual similarity, which preserves the topology of the
Euclidean input space. Another important result is that the density of the weight vectors in the weight
space follows the uniform probability distribution of the input vectors. If the distribution p(x) had been
non-uniform we would have found more grid points of the map where p(x) was high (this is explored in
Problem 3.5.5).

(a) Iteration 0 (b) Iteration 100

(c) Iteration 10,000 (d) Iteration 50,000

Figure 3.5.1. Mapping uniform random points from the positive unit square using a 10 × 10 planar array of
units (ρ0 = 0.8, ρf = 0.01, σ0 = 5, σf = 1, and kmax = 50,000).

(a) Iteration 0 (b) Iteration 10,000

(c) Iteration 30,000 (d) Iteration 70,000

Figure 3.5.2. Mapping uniform random points from a disk onto a 15 × 15 array of units (ρ0 = 0.8, ρf = 0.01,
σ0 = 8, σf = 1, and kmax = 70,000).

(a) Iteration 0 (b) Iteration 1,000


(c) Iteration 10,000 (d) Iteration 50,000

Figure 3.5.3. Mapping uniform random points from a triangle onto a 15 × 15 array of units (ρ0 = 0.8,
ρf = 0.01, σ0 = 8, σf = 1, and kmax = 50,000).

(a) Iteration 0 (b) Iteration 100

(c) Iteration 20,000 (d) Iteration 50,000

Figure 3.5.4. Mapping uniform random points from a hollow disk onto a 15 × 15 array of units (ρ0 = 0.8,
ρf = 0.01, σ0 = 8, σf = 1, and kmax = 50,000).

(a) Iteration 0 (b) Iteration 5000

(c) Iteration 20,000 (d) Iteration 40,000

Figure 3.5.5. Mapping from a hollow disk region onto a linear array (chain) of sixty units using a
SOFM.

Mappings from higher to lower dimensions are also possible with a SOFM, and are in general useful
for dimensionality reduction of input data. An illustrative example is shown in Figure 3.5.5 where a
mapping from a hollow disk region to a linear array (chain) of sixty units is performed using Kohonen's
feature map learning in Equation (3.5.1). Here, the units self-organize such that they cover the largest
region possible (space-filling curve).

Two additional interesting simulations of SOFMs, due to Ritter and Schulten (1988a), are considered
next. The first simulation involves the formation of a somatosensory map between the tactile receptors
of a hand surface and an "artificial cortex" formed by a 30 × 30 planar square array of linear units. The
training set consisted of the activity patterns of the set of tactile receptors covering the hand surface.
Figure 3.5.6 depicts the evolution of the feature map from an initial random map. Here, random points
are selected according to the probability distributions defined by regions D, L, M, R, and T shown in
Figure 3.5.6 (a). It is interesting to note the boundaries developed [shown as dotted regions in the maps
of Figure 3.5.6 (c) and (d)] between the various sensory regions. Also note the correlation between the
sizes of the sensory regions of the input data and the sizes of their associated regions in the converged map.

(a) (b)

(c) (d)

Figure 3.5.6. An example of somatosensory map formation on a planar array of 30 x 30 units. (From H.
Ritter and K. Schulten, 1988, Kohonen's Self-Organizing Maps: Exploring Their Computational
Capabilities, Proceedings of the IEEE International Conference on Neural Networks, vol. I, pp. 109-
116, © 1988 IEEE.)

The second simulation is inspired by the work of Durbin and Willshaw (1987) (see also Angeniol et al.,
1988, and Hueter, 1988) related to solving the traveling salesman problem (TSP) by an elastic net.
Here, 30 random city locations are chosen in the unit square with the objective of finding the path with
minimal length that visits all cities, where each city is visited only once. A linear array (chain) of 100
units is used for the feature map. The initial neighborhood size used is 20 and the initial learning rate is
1. Figure 3.5.7 (a) shows the 30 randomly chosen city locations (filled circles) and the initial location
and shape of the chain (open squares). The generated solution path is shown in Figures 3.5.7 (b), (c),
and (d) after 5,000, 7,500, and 10,000 steps, respectively.

(a) (b)

(c) (d)

Figure 3.5.7. Solving 30-city TSP using a SOFM consisting of a "chain" of one hundred units. Filled
circles correspond to fixed city locations. Open squares correspond to weight vector coordinates of
units forming the chain.
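A rough sketch of this TSP experiment follows (illustrative code, not from the text): a closed chain (ring) of units is trained as a one-dimensional SOFM on the city locations, using the initial neighborhood size of 20 and initial learning rate of 1 mentioned above; the tour is then read off by visiting the cities in the order of their winning units.

```python
import numpy as np

def sofm_tsp(cities, n_units=100, lr0=1.0, lrf=0.02, s0=20.0, sf=0.5, kmax=10_000, seed=0):
    """Approximate a TSP tour with a ring-shaped chain of SOFM units."""
    rng = np.random.default_rng(seed)
    W = cities.mean(axis=0) + 0.1 * rng.normal(size=(n_units, 2))   # initial chain shape
    idx = np.arange(n_units)
    for k in range(kmax):
        frac = k / kmax
        lr = lr0 * (lrf / lr0) ** frac
        sigma = s0 * (sf / s0) ** frac
        x = cities[rng.integers(len(cities))]
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        d = np.abs(idx - winner)
        d = np.minimum(d, n_units - d)                 # distance measured along the closed ring
        h = np.exp(-d ** 2 / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)
    # Read the tour off the chain: visit cities in the order of their winning units.
    order = np.argsort([np.argmin(np.linalg.norm(W - c, axis=1)) for c in cities])
    return order

cities = np.random.default_rng(3).random((30, 2))      # 30 random city locations in the unit square
print(sofm_tsp(cities))                                # an approximate visiting order
```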
Applications of SOFMs can be found in many areas, including trajectory planning for a robot arm
(Kohonen, 1989; Ritter and Schulten, 1988a), and combinatorial optimization (Angeniol et al., 1988;
Hueter, 1988; Ritter and Schulten, 1988a). In practical applications of self-organizing feature maps,
input vectors are often high-dimensional. Such is the case in speech processing. We conclude this
section by highlighting an interesting practical application of feature maps as phonotopic maps for
continuous speech processing (Kohonen, 1988).

In speech processing (transcription, recognition, etc.), a microphone signal is first digitally preprocessed
and converted into a 15-channel spectral representation covering the range of frequencies from 200 Hz to
5 kHz. Let these channels together constitute the 15-dimensional input vector x(t) to a feature
map (normally, each vector is preprocessed by subtracting the average from all components and
normalizing its length). Here, an 8 × 12 hexagonal array of units is assumed as depicted in Figure 3.5.8;
note how each unit in this array has six immediate neighbors. The self-organizing process of Equation
(3.5.1) is used to create a "topographic," two-dimensional map of speech elements onto the hexagonal
array.

Figure 3.5.8. Phonotopic feature map for Finnish speech. Units, shown as circles, are labeled with the
symbols of the phonemes to which they adapted to give the best responses. Some units are shown to
respond to two phonemes. (From T. Kohonen, 1988, The "Neural" Phonetic Typewriter, IEEE
Computer Magazine, 21(3), pp. 11-22, © 1988 IEEE.)

The input vectors x(t), representing short-time spectra of the speech waveform, are computed every
9.83 milliseconds. These inputs are presented in their natural order as inputs to the units in the array.
After training on sufficiently large segments of continuous speech (Finnish speech in this case) and
subsequent calibration of the resulting map with standard reference phoneme spectra, the phonotopic
map of Figure 3.5.8 emerges. Here, the units are labeled with the symbols of the phonemes to which
they "learned" to give the best responses. A striking result is that the various units in the array become
sensitized to spectra of different phonemes and their variations in a two-dimensional order, although
teaching was not done using the phonemes. One can attribute this to the fact that the input spectra are
clustered around phonemes, and the self-organizing process finds these clusters. Some of the units in
the array have double labels, indicating units that respond to two phonemes. For example, the
distinction of 'k', 'p', and 't' from this map is not reliable [a solution for such a problem is described in
Kohonen (1988)]. This phonotopic map can be used as the basis for isolated-word recognition. Here,
the signal x(t) corresponding to an uttered word induces an ordered sequence of responses in the units
in the array. Figure 3.5.9 shows the sequence of responses obtained from the phonotopic map when the
Finnish word "humpilla" (name of a place) was uttered. This sequence of responses defines a phonemic
transcription of the uttered word. Then this transcription can be recognized by comparing it with
reference transcriptions collected from a great many words. Also, these phonemic trajectories provide
the means for the visualization of the phonemes of speech, which may be useful for speech training
therapy. Further extensions of the use of SOFMs in automatic speech recognition can be found in
Tattersal et al. (1990).
Figure 3.5.9. Sequence of the responses of units obtained from the phonotopic map when the Finnish
word "humpilla" was uttered. The arrows correspond to intervals of 9.83 milliseconds. (From T.
Kohonen, 1988, The "Neural" Phonetic Typewriter, IEEE Computer Magazine, 21(3), pp. 11-22, ©
1988 IEEE.)

Kohonen (1988) employed the phonotopic map in a speech transcription system implemented in
hardware in a PC environment. This system is used as a "phonetic typewriter" which can produce text
from arbitrary dictation. When trained on speech from half a dozen speakers, using office text, names,
and the most frequent words of the language (this amounts to several thousand words), the phonetic
typewriter had a transcription accuracy between 92 and 97 percent, depending on the speaker and
difficulty of text. This system can be adapted on-line to new speakers by fine tuning the phonotopic
map on a dictation of about 100 words.

3.6 Summary

This chapter describes a number of basic learning rules for supervised, reinforcement, and
unsupervised learning. It presents a unifying view of these learning rules for the single unit setting.
Here, the learning process is viewed as steepest gradient-based search for a set of weights that
optimizes an associated criterion function. This is true for all learning rules regardless of the usual
taxonomy of these rules as supervised, reinforcement, or unsupervised. Preliminary characterizations of
some learning rules are given, with the in-depth mathematical analysis deferred to the next chapter.

Various forms of criterion functions are discussed including the perceptron, SSE, MSE, Minkowski-r,
and relative entropy criterion functions. The issue of which learning rules are capable of finding
linearly separable solutions led us to identify the class of well-formed criterion functions. These
functions, when optimized by an iterative search process, guarantee that the search will enter the region
in the weight space which corresponds to linearly separable solutions.

Steepest gradient-based learning is extended to the stochastic unit, which serves as the foundation for
reinforcement learning. Reinforcement learning is viewed as a stochastic process which attempts to
maximize the average reinforcement.

Unsupervised learning is treated in the second half of the chapter. Here, Hebbian learning is discussed
and examples of stable Hebbian-type learning rules are presented along with illustrative simulations. It
is pointed out that Hebbian learning applied to a single linear unit tends to maximize the unit's output
variance. Finally, simple, single layer networks of multiple interconnected units are considered in the
context of competitive learning, learning vector quantization, principal component analysis, and self-
organizing feature maps. Simulations are also included which are designed to illustrate the powerful
emerging computational properties of these simple networks and their application. It is demonstrated
that local interactions in a competitive net can lead to global order. A case in point is the SOFM where
simple incremental interactions among locally neighboring units lead to a global map which preserves
the topology and density of the input data.

Table 3.1 summarizes all learning rules considered in this chapter. It gives a quick reference to the
learning equations and their associated criterion functions, appropriate parameter initializations, type of
unit activation function employed, and some remarks on convergence behavior and the nature of the
obtained solution.
Table 3.1. Summary of basic learning rules. For each rule, its type and the recoverable remarks on its conditions and convergence behavior are listed.

Perceptron rule (supervised): Finite convergence time if the training set is linearly separable.

Perceptron rule with variable learning rate and fixed margin (supervised): Requires a margin b > 0 and a suitably chosen learning rate sequence ρk. Converges to zero error if the training set is linearly separable; finite convergence if ρ is a finite positive constant.

Mays' rule (supervised): Requires a margin b > 0. Finite convergence to a solution if the training set is linearly separable.

Butz's rule (supervised): Requires a positive reinforcement factor. Finite convergence if the training set is linearly separable; places the separating surface in a region that tends to minimize the probability of error for nonlinearly separable cases.

Widrow-Hoff rule (α-LMS) (supervised): Converges in the mean square to the minimum SSE or LMS solution if ||xi|| = ||xj|| for all i and j.

µ-LMS rule (supervised): Converges in the mean square to the minimum SSE or LMS solution.

Stochastic µ-LMS rule (supervised): At each learning step the training vector is drawn at random; with a suitably decreasing learning rate µk, converges in the mean square to the minimum SSE or LMS solution.

Correlation rule (supervised): Converges to the minimum SSE solution if the xk's are mutually orthonormal.

Delta rule (supervised): Uses y = f(net), where f is a sigmoid function. Extends the µ-LMS rule to cases with differentiable nonlinear activations.

Minkowski-r delta rule (supervised): Uses y = f(net), where f is a sigmoid function. 1 < r < 2 is appropriate for pseudo-Gaussian distributions p(x) with pronounced tails; r = 2 gives the delta rule; r = 1 arises when p(x) is a Laplace distribution.

Relative entropy delta rule (supervised): Uses y = tanh(net). Eliminates the learning slowdown (due to sigmoid saturation) suffered by the delta rule; converges to a linearly separable solution if one exists.

AHK I (supervised): The margins bi can only increase from their initial values. Converges to a robust solution for linearly separable problems.

AHK II (supervised): The margins bi can take any positive value. Converges to a solution for linearly separable problems.

AHK III (supervised): Converges for linearly separable as well as nonlinearly separable cases. It automatically identifies and discards the critical points affecting the nonlinear separability, and results in a solution which tends to minimize misclassifications.

Delta rule with stochastic neuron (supervised): Uses a stochastic activation; its performance, in the average, is equivalent to the delta rule applied to a unit with the corresponding deterministic activation.

Simple associative reward/penalty rule (reinforcement): Uses a stochastic activation (as above); wk evolves so as to maximize the average reinforcement signal.

Hebbian rule (unsupervised): The weight vector grows in magnitude without bound, pointing in the direction of c(1).

Oja's rule (unsupervised): Does not have an exact J(w); however, this rule tends to maximize <y2> subject to the constraint ||w|| = 1. Converges in the mean to a unit-length weight vector along c(1), which maximizes <y2>.

Yuille et al. rule (unsupervised): Converges in the mean to a weight vector in the direction of c(1), which maximizes <y2>.

Linsker's rule (unsupervised): Maximizes <y2> subject to bounding constraints on the weights. Converges to a w* whose components are clamped at w− or w+ when a is large.

Hassoun's rule (unsupervised): Converges in the mean to a w* parallel to c(1); w* approaches c(1) (see Section 4.3.5 for details).

Standard competitive learning rule (unsupervised): With a suitably decreasing learning rate ρk and i* denoting the index of the winning unit, converges to a local minimum of J representing some clustering configuration.

Kohonen's feature map rule (unsupervised): With ρk and σk evolving according to decreasing schedules, the weight vectors evolve to a solution which tends to preserve the topology of the input space. The asymptotic point density of the solution is of the form g(p(x)), where g is a continuous, monotonically increasing function and p(x) is a stationary probability density function.

Notes: c(i) denotes the ith normalized eigenvector of the autocorrelation matrix C, with corresponding eigenvalue λi (the eigenvalues are ordered so that λ1 ≥ λ2 ≥ ... ≥ λn). The general form of each learning equation is wk+1 = wk + ρ(learning vector), where ρ is the learning rate. The criterion functions associated with the unsupervised rules above are discussed in Chapter 4.
Problems:

3.1.1 Show that the choice , in the convergence proof for the perceptron learning
rule, minimizes the maximum number of corrections k0 if w1 = 0.

*3.1.2 This problem explores an alternative convergence proof (Kung, 1993) for the perceptron rule in
Equation (3.1.2). Here, we follow the notation in Section 3.1.1.

a. Show that if xk is misclassified by a perceptron with weight vector wk, then

(1)

where w* is a solution which separates all patterns xk correctly.

b. Show that if we restrict ρ to sufficiently small values, then wk+1 converges in a finite number of steps
(recall that no weight update is needed if all xk are classified correctly). Does wk+1 have to converge to the
particular solution w*? Explain.

c. Show that if xk is misclassified, then ||wk+1 − w*|| is minimized by setting ρ to its optimal value ρopt.

d. Show that the choice ρ = ρopt guarantees the convergence of the perceptron learning rule in a finite number
of steps. In other words, show that if the choice ρ = ρopt is made, then wk+1 in Equation (1) stops changing
after a finite number of corrections.

e. Show that the use of ρ = ρopt in Equation (3.1.2) leads to the learning rule
(Note that this learning rule is impractical, since it requires a solution, w*, to be known!)

3.1.3 Show that the perceptron criterion function J in Equation (3.1.20) is piecewise linear. Next, consider
the four training pairs {−4, −1}, {−2, −1}, {−1, +1}, and {+1, +1}. Note that these pairs take the form {x, d}
where x, d R. The following is a guided exploration into the properties of the function J for this linearly
separable training set. This exploration should give some additional insights into the convergence process
of the perceptron learning rule.

a. Plot the criterion function J for a two-input perceptron with weights w1 and w2, over the range
and . Here, w2 is the weight associated with the bias input (assume a
bias of +1).

b. Identify the solution region in the weight space containing all linearly separable solutions w*. What is the
value of J in this region?

c. Based on the above results, describe the evolution process of the weight vector in Equation (3.1.23),
starting from an arbitrary initial weight vector.

d. For comparative purposes, plot the quadratic criterion function in Equation (3.1.25) with b = 0. Comment
on the differentiability of this function.

3.1.4 Show that the α -LMS rule in Equation (3.1.30) can be obtained from an incremental gradient descent
on the criterion function

† 3.1.5 For the simple two-dimensional linearly separable two-class problem in Figure P3.1.5, compute and
plot the dynamic margins (bi) vs. learning cycles using the AHK I learning rule with the following initial
values and parameters: w1 = 0, ρ1 = 0.1, ρ2 = 0.05, and initial margins of 0.1. Repeat using the AHK II
rule. Compare the convergence speed and quality of the solution of these two rules.

Figure P3.1.5. A simple two-class linearly separable pattern for Problem 3.1.5.

3.1.6 Draw the two decision surfaces for Problem 3.1.5 and compare them to the decision surfaces
generated by the perceptron rule, µ -LMS learning rule (use µ = 0.05), and Butz's rule (use = 0.1 and a
reinforcement factor ). Assume w1 = 0 for these rules.

† 3.1.7 Repeat Problem 3.1.5 for the two-class problem in Figure P3.1.7, using the AHK III rule.
Figure P3.1.7. A simple two-class nonlinearly separable training set for Problem 3.1.7.

3.1.8 Draw the decision surface for Problem 3.1.7 and compare it to the decision surfaces generated by the
perceptron rule, µ -LMS learning rule (use µ = 0.05), and Butz's rule (use = 0.1 and a reinforcement
factor ). Assume w1 = 0 for these rules.

3.1.9 Check the "well-formedness" of the criterion functions of the following learning rules.

a. Mays' rule.

b. µ -LMS rule.

c. AHK I rule.

3.1.10 a. Is there a value for r for which the Minkowski-r criterion function is well-formed?

b. Is the relative entropy criterion function well-formed?

3.1.11 Consider the training set {xk, dk}, k = 1, 2, ..., m, with . Define a stability measure for
pattern k as

where w is a perceptron weight vector which is constrained to have a constant length .

Employing statistical mechanics arguments, it is possible to show (Gordon et al., 1993) that the mean
number of errors made by the perceptron on the training set at "temperature" T (T is a monotonically
decreasing positive function of training time) is:

where b is an imposed stability factor specified by the user.


Employing gradient descent on J(w), show that an appropriate update rule for the ith weight is given by:

Give a qualitative analysis of the behavior of this learning rule and compare it to the correlation and AHK
III rules. Explain the effects of b and T on the placement of the separating hyperplane. How would this rule
behave for nonlinearly separable training sets?

3.1.12 Consider the learning procedure (Polyak, 1990):

where and . Discuss qualitatively the difference between this learning


procedure and the -LMS learning rule (Equation 3.1.35).

3.1.13 Consider the Minkowski-r criterion function in Equation (3.1.68). If no prior knowledge is available
about the distribution of the training data, then extensive experimentation with various r values must be
done in order to estimate an appropriate value for r. Alternatively, an automatic method for estimating r is
possible by adaptively updating r in the direction of decreasing E. Employ steepest gradient descent on E(r)
in order to derive an update rule for r.

† 3.2.1 Use the Arp rule in Equation (3.2.1) with = 1, to find the stochastic unit separating surface wTx = 0,
arrived at after 10, 50, 100 and 200 cycles through the training set in Figure P3.2.1. Start with

, and assume x3 = +1. Use reinforcement rk equal to +1 if xk is correctly classified, and


−1 otherwise. Assume a reward learning rate of 0.1 and a penalty learning rate of 0.01. Repeat by subjectively assigning a +1 or a −1 to rk based on
whether the movement of the separating surface after each presentation is good or bad, respectively. Use
a reward rate of 0.6 and a penalty rate of 0.06, and plot the generated separating surface after the first twenty presentations.

Figure P3.2.1 Two-class linearly separable training set for Problem 3.2.1.
† 3.3.1 Consider a data set of two-dimensional vectors x generated as follows: x1 and x2 are distributed
randomly and independently according to the normal distributions N(0, 0.1) and N(0, 0.01), respectively.
Generate and plot 1000 data points x in the input plane. Compute and plot (in the input plane) the solution
trajectories wk generated by the normalized Hebbian rule, Oja's rule, and Yuille et al. rule upon training a
single linear unit with weight vector w on samples x drawn randomly and independently from the above
distribution. In your simulations, assume a learning rate of 0.01 and stop training after 1500 iterations. Also, assume
w1 = [1.5, 1]T in all simulations. Verify graphically that, for large k, the average direction of wk is that of
the maximum data variance. Study, via simulation, the effects of larger learning rates (e.g., 0.05 and 0.1) and
various initial weight vectors on the wk trajectories.

3.3.2 Show that the average Oja rule has as its equilibria, the eigenvectors of C. Start by showing that

is the average of Oja's rule in Equation (3.3.6).

* 3.3.3 Starting from , show that for Oja's rule. Evaluate the
Hessian matrix at the equilibria found in Problem 3.3.2. Study the stability of these equilibria states
in terms of the eigenvalues of the Hessian matrix.

3.3.4 Show that the average weight change, , in Yuille et al. rule is given by steepest gradient
descent on the criterion function

3.3.5 Consider Sanger's rule (Equation 3.3.19) with m = 2 units. Assume that the first unit has already
converged to c(1), the principal eigenvector of C. Show that the second unit's update equation for the
average weight vector change is given by:

and that c(1) is not an equilibrium point.

* 3.3.6 Show that the average weight change of the ith unit for Sanger's rule is given by (Hertz et
al., 1991)

Now assume that the weight vectors for the first i − 1 units have already converged to their appropriate

eigenvectors, so wk = c(k) for k < i. Show that the first two terms in give the projection of onto
the space orthogonal to the first i − 1 eigenvectors of C. Employ these results to show that wi converges to
c(i).

3.3.7 Approximate the output of a sigmoid unit having the transfer characteristics y = tanh(wTx) by the first
four terms in a Taylor series expansion. Show that this unit approximates a third order unit if it is operated
near wTx = 0. Comment on the effects of the saturation region on the quality of the higher-order principal
component analysis realized when the unit is trained using Hebbian learning.
3.4.1 Let xk ∈ {0, 1}n and let the initial weights satisfy Σj wij = 1, for all i, in a single-layer competitive network. Assume the following learning rule for the ith unit

Show that this rule preserves the normalization Σj wij = 1 for all k.

† 3.4.2 Consider the 200-sample two-dimensional data set {xk} generated randomly and independently as follows: 50 samples generated according to the normal distribution N([−5, +5]T, 1), 75 samples generated according to N([+5, +5]T, 2), and 75 samples generated according to a uniform distribution over the rectangular region bounded by the indicated x1 range and −7.5 ≤ x2 ≤ −2.5. Use a five-unit competitive net in order to discover the underlying clusters in this data set, as in Example 3.4.1 [use Equations (3.4.5) and (3.4.2)]. Assume a learning rate of 0.01, and start with random weight vectors distributed uniformly in the region |w1| ≤ 10 and |w2| ≤ 10. Stop training after 2000 iterations (steps); at each iteration, the vector xk is to be chosen from the data set at random. Depict the evolution of the weight vectors of all five units as in Figure 3.4.5. Finally, plot the clusters discovered by the net (as in Figure 3.4.6) and compare this solution to the real solution.
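
A minimal sketch of this experiment follows, under stated assumptions: N(m, s) is taken as a Gaussian with mean vector m and covariance s·I, the x1 range of the uniform region (not recoverable from the text above) is replaced by a placeholder range, and winner selection and the update follow the simple competitive rule w_winner ← w_winner + η(x − w_winner):

import numpy as np

rng = np.random.default_rng(1)
a = rng.multivariate_normal([-5, 5], 1 * np.eye(2), 50)
b = rng.multivariate_normal([5, 5], 2 * np.eye(2), 75)
c = np.column_stack([rng.uniform(2.5, 7.5, 75),            # placeholder x1 range (assumption)
                     rng.uniform(-7.5, -2.5, 75)])
data = np.vstack([a, b, c])

eta, n_units = 0.01, 5
W = rng.uniform(-10, 10, (n_units, 2))                      # random initial weight vectors
for step in range(2000):
    x = data[rng.integers(len(data))]
    winner = np.argmin(np.linalg.norm(W - x, axis=1))        # closest weight vector wins
    W[winner] += eta * (x - W[winner])

labels = np.argmin(np.linalg.norm(data[:, None, :] - W[None], axis=2), axis=1)
print("final weight vectors:\n", W)
print("cluster sizes:", np.bincount(labels, minlength=n_units))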

† 3.4.3 Repeat Problem 3.4.2 using the similarity measure in Equation (3.4.3), with no weight vector
normalization, for determining the winning unit. Make sure that you use the same initial weight vectors as
in the previous problem. Discuss any differences in the number and/or shape of the generated clusters,
compared to the solution in Problem 3.4.2.

† 3.4.4 Consider a Voronoi quantizer with the following 10 reconstruction vectors: [0, 0]T, [1, 0]T, [−1, 0]T,
[0, −1]T, [0, 1]T, [3, 3]T, [4, 3]T, [2, 3]T, [3, 2]T, and [3, 4]T.

a. Draw the input space partitions (Voronoi tessellation) realized by the above quantizer.

b. Starting with the Voronoi quantizer of part a, use the LVQ method described in Section 3.4.2 in order to
design a two-class classifier for data generated randomly according to the probability density functions
p1(x) = N([0, 0]T, 1) and p2(x) = N([3, 3]T, 2) for classes 1 and 2, respectively. Assume equal a priori
probability of the two classes. During training, use the adaptive learning rates in Equation (3.4.8), initialized to the specified values. Stop training after 1000 iterations.

c. Draw the Voronoi tessellations realized by the weight vectors (reconstruction vectors) which resulted
from the LVQ training in part b. Compare these tessellations to the ones drawn in part a.

d. Draw the decision boundary for the classifier in part b.


*3.5.1 Consider the following criterion function, suitable for solving the TSP in an elastic net architecture (Durbin and Willshaw, 1987):

where m is the number of units in the net and the scale parameter is positive. Here, xk represents the position of city k and wi represents the ith stop.

a. Show that gradient descent on J leads to the update rule:

where

b. Give qualitative interpretations for the various terms in J and in the weight update Δwi.

c. Show that J is bounded below and that, as the scale parameter tends to zero and m → ∞, J is minimized by the shortest possible tour.

†3.5.2 Consider the L-shaped region shown in Figure P3.5.2. Train a two-dimensional Kohonen feature map on points generated randomly and uniformly from this region. Assume a 15 × 15 array of units and the following learning parameters: kmax = 80,000, initial and final learning rates of 0.8 and 0.01, respectively, and the corresponding initial and final neighborhood widths. Choose all initial weights randomly inside the region. Display the trained net as in Figure 3.5.1 at times 0, 1000, 20,000, 40,000, and 80,000.
Figure P3.5.2. L-shaped region representing a uniform distribution of inputs for the feature map of
Problems 3.5.2 and 3.5.3.

† 3.5.3 Repeat Problem 3.5.2 (but with an initial neighborhood width of 20) for a one-dimensional chain consisting of 60 units initialized randomly inside the L-shaped region. Display the trained map as in Figure 3.5.5 at various training times.

† 3.5.4 The self-organizing map simulation in Figure 3.5.5 employs initial and final learning rates of 0.8 and 0.01, respectively, the corresponding neighborhood-width schedule, and kmax = 70,000. Repeat the simulation and plot the map (chain) at iterations 50,000 and 70,000. (Note:
the chain configuration in Figure 3.5.5 (d) is not a stable configuration for the above parameters.) Repeat

the simulation with and compare the resulting chain configuration at 70,000 iterations to that in the
previous simulation. Discuss your results.

†3.5.5 Repeat the SOFM simulation in Figure 3.5.1 (see Section 3.5.3 for details) assuming
p(x) = N([0, 0]T, 0.1), |x1| ≤ 1, and |x2| ≤ 1. Use the learning parameters given in the caption of Figure 3.5.1.
What is the general shape of the point distribution p(w) of the weight vectors? Estimate the variance of
p(w). Is p(w) proportional to p(x)?
4. MATHEMATICAL THEORY OF
NEURAL LEARNING
4.0 Introduction

This chapter deals with theoretical aspects of learning in artificial neural networks. It investigates, mathematically, the nature and stability of the asymptotic solutions obtained using the basic supervised, Hebbian, and reinforcement learning rules introduced in the previous chapter. Formal analysis is also given for simple competitive learning and for self-organizing feature map learning.

A unifying framework for the characterization of various learning rules is presented. This framework is based on the notion that learning in general neural networks can be viewed as a search, in a multidimensional space, for a solution which optimizes a prespecified criterion function, with or without constraints. Under this framework, a continuous-time learning rule is viewed as a first-order stochastic differential equation (dynamical system), whereby the state of the system evolves so as to minimize an associated instantaneous criterion function. Approximation techniques are employed to determine, in an average sense, the nature of the asymptotic solutions of the stochastic system. This approximation leads to an "average learning equation" which, in most cases, can be cast as a globally asymptotically stable gradient system whose stable equilibria are minimizers of a well-defined average criterion function. Finally, and subject to certain assumptions, these stable equilibria can be taken as the possible limits (attractor states) of the stochastic learning equation.

The chapter also treats two important issues associated with learning in a general feedforward neural network: learning generalization and learning complexity. The section on generalization presents a theoretical method for calculating the asymptotic probability of correct generalization of a neural network as a function of the training set size and the number of free parameters in the network. Here, generalization in deterministic and stochastic nets is investigated. The chapter concludes by reviewing some significant results on the complexity of learning in neural networks.

4.1 Learning as a Search Mechanism

Learning in an artificial neural network, whether it is supervised, reinforcement, or unsupervised, can be viewed as a process of searching a multidimensional parameter space for a state which optimizes a predefined criterion function J (J is also commonly referred to as an error function, a cost function, or an objective function). In fact, all of the learning rules considered in Chapter 3, except Oja's rule (refer to Table 3.1), have well-defined analytical criterion functions. These learning rules implement local search mechanisms (i.e., gradient search) to obtain weight vector solutions which (locally) optimize the associated criterion function. Therefore, it is the criterion function that determines the nature of a learning rule. For example, supervised learning rules are designed so as to minimize an error measure between the network's output and the corresponding desired output, unsupervised Hebbian learning is designed to maximize the variance of the output of a given unit, and reinforcement learning is designed to maximize the average reinforcement signal.

Supervised learning can be related to classical approximation theory (Poggio and Girosi, 1990a). Here, the idea is to approximate or interpolate a continuous multivariate function g(x), from samples {x, g(x)}, by an approximation function (or class of functions) G(w, x), where w ∈ Rd is a parameter vector with d degrees of freedom and x belongs to a compact subset of Rn. In this case, the set of samples {x, g(x)} is referred to as a training set. The approximation problem is to find an optimal parameter vector that provides the "best" approximation of g on this set for a given class of functions G. Formally stated, we desire a solution w* ∈ Rd such that

||G(w*, x) − g(x)|| < ε for all x in the compact set    (4.1.1)

where the tolerance ε is a positive real number and ||·|| is any appropriate norm. For the case where ||·|| is the Euclidean norm, it is well known that an appropriate criterion function J is

J(w) = (1/2) Σ [g(x) − G(w, x)]2    (4.1.2)

where the sum runs over the training samples, and whose global minimum represents the minimum sum of squared error (SSE) solution. The choice of the approximation function G, the criterion function J, and the search mechanism for w* all play critical roles in determining the quality and properties of the resulting solution/approximation.

Usually we know our objective, and this knowledge is translated into an appropriate criterion function J. In terms of search mechanisms, gradient-based search is appropriate (and is simple to implement) for cases where it is known that J is differentiable and bounded. Of course, if gradient search is used, we must be willing to accept locally optimal solutions. In general, only global search mechanisms, which are computationally very expensive, may lead to globally optimal solutions (global search-based learning is the subject of Chapter 8). Among the above three factors, the most critical in terms of affecting the quality of the approximation in Equation (4.1.1) is the choice of the approximation function G.

In classical analysis, polynomials and rational functions are typically used for function approximation. On
the other hand, for artificial neural networks, the approximation functions are usually chosen from the class
of smooth sigmoidal-type functions, and the approximation is constructed as a superposition of such
sigmoidal functions. In general, there are two important issues in selecting an appropriate class of basis
functions: universality and generalization. By universality, we mean the ability of G to represent, to any
desired degree of accuracy, the class of functions g being approximated; in Chapter 2, we have established
the universality of feedforward layered neural nets. On the other hand, generalization means the ability of G
to correctly map new points x, drawn from the same underlying input distribution p(x), which were not seen
during the learning phase. Thus, an interesting question here is how well a neural network compares to other universal approximation functions in terms of generalization (some insight into answering this question is given in Section 5.2.5). Later in this chapter, we will see that for feedforward neural networks,
the number of degrees of freedom (weights) in G plays a critical role in determining the degree of data
overfitting, which directly affects generalization quality.

Another way to control generalization is through criterion function conditioning. A "regularization" term (Poggio and Girosi, 1990a) may be added to an initial criterion function J according to

J̃(w) = J(w) + λ||P(w)||2    (4.1.3)

The ||P(w)||2 term in Equation (4.1.3) is used to embed a priori information about the function g, such as smoothness, invariance, etc. In this case, J is the quantity to be minimized subject to the regularization constraint that ||P(w)||2 is kept "small," with the Lagrange multiplier λ determining the degree of regularization. These ideas can also be extended to unsupervised learning, where the term λ||P(w)||2 may be thought of as a constraint satisfaction term; such a term may help condition the criterion so that the search process is stabilized. Examples of this regularization strategy have already been encountered in the unsupervised Hebbian-type learning rules presented in Chapter 3 [e.g., refer to Equation (3.3.8)].
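
The following is a small numerical illustration of Equations (4.1.2) and (4.1.3): gradient descent on a sum-of-squared-error criterion with an added regularization term. The choices G(w, x) = wTx (a linear approximator) and P(w) = w (a plain weight-decay regularizer), as well as all parameter values, are illustrative assumptions rather than anything prescribed by the text:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
g = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)   # samples of g(x)

def grad_J(w, lam):
    err = X @ w - g
    return X.T @ err + lam * w        # gradient of 0.5*||Xw - g||^2 + 0.5*lam*||w||^2

w, eta, lam = np.zeros(5), 0.01, 0.5  # lam plays the role of the Lagrange multiplier
for _ in range(2000):
    w -= eta * grad_J(w, lam)
print("regularized solution:", np.round(w, 3))
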
4.2 Mathematical Theory of Learning in a Single Unit Setting

In this section, instead of dealing separately with the various learning rules proposed in the previous chapter, we seek to study a single learning rule, called the general learning equation (Amari, 1977a and 1990), which captures the salient features of several of the different single-unit learning rules. Two forms of the general learning equation will be presented: a discrete-time version, in which the weight vector evolves according to a discrete dynamical system of the form w(k+1) = g(w(k)), and a continuous-time version, in which the weight vector evolves according to a smooth dynamical system of the form dw/dt = f(w). Statistical analysis of the continuous-time version of the general learning equation is then performed for selected learning rules, including the correlation, LMS, and Hebbian learning rules.

4.2.1 General Learning Equation

Consider a single unit which is characterized by a weight vector w ∈ Rn, an input vector x ∈ Rn, and, in some cases, a scalar teacher signal z. In a supervised learning setting, the teacher signal is taken as the desired target associated with a particular input vector. The input vector (signal) is assumed to be generated by an environment or an information source according to the probability density p(x, z), or p(x) if z is missing (as in unsupervised learning). Now, consider the following discrete-time dynamical process which governs the evolution of the unit's weight vector w

w(k+1) = w(k) − γw(k) + ρ r(w(k), x(k), z(k)) x(k)    (4.2.1)

and the continuous-time version

dw/dt = −γw + ρ r(w, x, z) x    (4.2.2)

where ρ and γ are positive real constants. Here, r(w, x, z) is referred to as a "learning signal." One can easily verify that the above two equations lead to discrete-time and continuous-time versions, respectively, of the perceptron learning rule in Equation (3.1.2) if γ = 0, y = sgn(wTx), and r(w, x, z) = z − y (here z is taken as bipolar binary). The μ-LMS (or Widrow-Hoff) rule of Equation (3.1.35) can be obtained by setting γ = 0 and r(w, x, z) = z − wTx in Equation (4.2.1). Similarly, substituting γ = 0, ρ = 1, and r(w, x, z) = z in Equation (4.2.1) leads to the simple correlation rule in Equation (3.1.50), and γ = 0 with r(w, x, z) = y leads to the Hebbian rule in Equation (3.3.1). In the remainder of this section, Equation (4.2.2) is adopted and is referred to as the "general learning equation." Note that in Equation (4.2.2) the state w* = 0 is an asymptotically stable equilibrium point if either r(w, x, z) or x is identically zero. Thus, the term −γw in Equation (4.2.2) plays the role of a "forgetting term" which tends to "erase" those weights not receiving sufficient reinforcement during learning.
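
The following sketch illustrates the discrete-time general learning equation as reconstructed above, w ← w − γw + ρ r(w, x, z)x, and how different choices of the learning signal r recover familiar rules; the toy teacher signal and all parameter values are illustrative assumptions:

import numpy as np

def general_step(w, x, z, r_fn, rho=0.1, gamma=0.0):
    # One step of w <- w - gamma*w + rho*r(w,x,z)*x
    return w - gamma * w + rho * r_fn(w, x, z) * x

r_perceptron  = lambda w, x, z: z - np.sign(w @ x)    # r = z - y, with y = sgn(w.x)
r_lms         = lambda w, x, z: z - w @ x             # r = z - w.x  (LMS-type signal)
r_correlation = lambda w, x, z: z                     # r = z        (correlation rule)
r_hebb        = lambda w, x, z: w @ x                 # r = y        (Hebbian rule)

rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(100):
    x = rng.normal(size=3)
    z = np.sign(x[0])                                  # a toy teacher signal (assumption)
    w = general_step(w, x, z, r_lms)
print("weights after 100 LMS-signal steps:", np.round(w, 3))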

From the point of view of analysis, it is useful to think of Equation (4.2.2) as implementing a fixed-increment steepest gradient descent search of an instantaneous criterion function J, or formally,

dw/dt = −ρ ∇w J    (4.2.3)

For the case r(w, x, z) = r(wTx, z) = r(u, z), the right-hand side of Equation (4.2.2) can be integrated to yield

J(w) = (γ/2ρ)||w||2 − ∫0u r(v, z) dv    (4.2.4)

where u = wTx. This type of criterion function (which has the classic form of a potential function) is appropriate for learning rules such as the perceptron, LMS, Hebbian, and correlation rules. In the most general case, however, r(w, x, z) ≠ r(wTx, z), and a suitable criterion function J satisfying Equation (4.2.3) may not readily be determined (or may not even exist). It is interesting to note that finding the equilibrium points w* of the general learning equation does not require the knowledge of J explicitly if the gradient of J is known.

The criterion function J in Equation (4.2.4) fits the general form of the constrained criterion function in Equation (4.1.3). Therefore, we may view the task of minimizing J as an optimization problem with the objective of maximizing ∫0u r(v, z) dv subject to a regularization term which penalizes solution vectors w* with large norm. It is interesting to note that by maximizing ∫0u r(v, z) dv, one is actually maximizing the amount of information learned from a given example pair {x, z}. In other words, the general learning equation is designed so that it extracts the maximum amount of "knowledge" present in the learning signal r(w, x, z).

4.2.2 Analysis of the Learning Equation

In a stochastic environment where the information source is ergodic, the sequence of inputs x(t) is an independent stochastic process governed by p(x). The general learning equation in Equation (4.2.2) then becomes a stochastic differential equation or a stochastic approximation algorithm. The weight vector w is changed in random directions depending on the random variable x. From Equation (4.2.3), the average value of dw/dt becomes proportional to the average gradient of the instantaneous criterion function J. Formally, we write

⟨dw/dt⟩ = −ρ ⟨∇w J⟩    (4.2.5)

where ⟨·⟩ implies averaging over all possible inputs x with respect to the probability distribution p(x). We will refer to this equation as the "average learning equation." Equation (4.2.5) may be viewed as a steepest gradient descent search for w* which (locally) minimizes the expected criterion function ⟨J⟩, because the linear nature of the averaging operation allows us to express Equation (4.2.5) as

⟨dw/dt⟩ = −ρ ∇w ⟨J⟩    (4.2.6)

It is interesting to note that finding w* does not require the knowledge of J explicitly if the gradient of J is known. Equation (4.2.6) is useful from a theoretical point of view in determining the equilibrium state(s) and in characterizing the stochastic learning equation [Equation (4.2.2)] in an "average" sense. In practice, the stochastic learning equation is implemented, and its average convergence behavior is characterized by the "average learning equation" given as

⟨dw/dt⟩ = ⟨−γw + ρ r(w, x, z) x⟩    (4.2.7)

The gradient system in Equation (4.2.6) has special properties that make its dynamics rather simple to analyze. First, note that the equilibria w* are solutions to ∇⟨J⟩ = 0. This means that the equilibria w* are local minima, local maxima, and/or saddle points of ⟨J⟩. Furthermore, it is a well-established result that, for any ρ > 0, these local minima are asymptotically stable points (attractors) and that the local maxima are unstable points (Hirsch and Smale, 1974). Thus, one would expect the stochastic dynamics of the system in Equation (4.2.3), with sufficiently small ρ, to approach a local minimum of ⟨J⟩.
In practice, discrete-time versions of the stochastic dynamical system in Equation (4.2.3) are used for weight adaptation. Here, the stability of the corresponding discrete-time average learning equation (discrete-time gradient system) is ensured if 0 < ρ < 2/λmax, where λmax is the largest eigenvalue of the Hessian matrix H = ∇2⟨J⟩ evaluated at the current point in the search space (the proof of this statement is outlined in Problem 4.3.8). These discrete-time "learning rules" and their associated average learning equations have been extensively studied in a more general context than that of neural networks. The book by Tsypkin (1971) gives an excellent treatment of these iterative learning rules and their stability.
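
As a quick numerical check of the step-size bound just stated, the sketch below estimates λmax for the LMS case, where the Hessian of ⟨J⟩ is the autocorrelation matrix C = ⟨xxT⟩ (the data are an illustrative assumption):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
C = X.T @ X / len(X)                      # sample autocorrelation matrix, i.e., the Hessian
lam_max = np.linalg.eigvalsh(C).max()
print("largest Hessian eigenvalue:", round(lam_max, 3))
print("stable learning-rate range: 0 < rho <", round(2 / lam_max, 3))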

4.2.3 Analysis of Some Basic Learning Rules

By utilizing Equation (4.2.7), we are now ready to analyze some basic learning rules. These are the
correlation, LMS, and Hebbian learning rules.

Correlation Learning

Here, r(w, x, z) = z, which represents the desired target associated with the input x. From Equation (4.2.2) we have the stochastic equation

dw/dt = −γw + ρ z x    (4.2.8)

which leads to the average learning equation

⟨dw/dt⟩ = −γ⟨w⟩ + ρ ⟨z x⟩    (4.2.9)

Now, by setting ⟨dw/dt⟩ = 0, one arrives at the (only) equilibrium point

w* = (ρ/γ) ⟨z x⟩    (4.2.10)

The stability of w* may now be systematically studied through the "expected" Hessian matrix ⟨H(w*)⟩, which is computed, by first employing Equations (4.2.5) and (4.2.9) to identify ∇⟨J⟩, as

⟨H(w*)⟩ = (γ/ρ) I    (4.2.11)

This equation shows that the Hessian of ⟨J⟩ is positive definite; i.e., its eigenvalues are strictly positive or, equivalently, the eigenvalues of −⟨H⟩ are strictly negative. This makes the system locally asymptotically stable at the equilibrium solution w* by virtue of Liapunov's first method (see Gill et al., 1981; Dickinson, 1991). Thus, w* is a stable equilibrium of Equation (4.2.9). In fact, the positive definite Hessian implies that w* is a minimum of ⟨J⟩, and therefore the gradient system in Equation (4.2.9) converges globally and asymptotically to w*, its only minimum, from any initial state. Thus, the trajectory w(t) of the stochastic system in Equation (4.2.8) is expected to approach and then fluctuate about the state w* = (ρ/γ)⟨z x⟩.

From Equation (4.2.4), the underlying instantaneous criterion function J is given by

J(w) = (γ/2ρ)||w||2 − z wTx    (4.2.12)

which may be minimized by maximizing the correlation zy subject to the regularization term (γ/2ρ)||w||2. Here, the regularization term is needed in order to keep the solution bounded.
LMS Learning

For r(w, x, z) = z − wTx (the output error due to input x) and γ = 0, Equation (4.2.2) leads to the stochastic equation

dw/dt = ρ (z − wTx) x    (4.2.13)

In this case, the average learning equation becomes

⟨dw/dt⟩ = ρ (⟨z x⟩ − ⟨x xT⟩⟨w⟩)    (4.2.14)

with equilibria satisfying

⟨x xT⟩ w* = ⟨z x⟩

or

w* = ⟨x xT⟩−1 ⟨z x⟩    (4.2.15)

Let C denote the positive semidefinite autocorrelation matrix ⟨x xT⟩ defined in Equation (3.3.4), and let P = ⟨z x⟩. If C is nonsingular, then w* = C−1P is the equilibrium state. Note that w* approaches the minimum SSE solution in the limit of a large training set, and that this analysis is identical to the analysis of the μ-LMS rule in Chapter 3. Let us now check the stability of w*. The Hessian matrix is

H = ⟨x xT⟩ = C    (4.2.16)

which is positive definite if C is nonsingular. Therefore, w* = C−1P is the only (asymptotically) stable solution for Equation (4.2.14), and the stochastic dynamics in Equation (4.2.13) are expected to approach this solution.

Finally, note that with γ = 0 Equation (4.2.4) leads to

J(w) = (1/2)(wTx)2 − z wTx

or, to within an additive constant that does not depend on w,

J(w) = (1/2)(z − wTx)2    (4.2.17)

which is the instantaneous SSE (or MSE) criterion function.
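
The sketch below numerically illustrates this result: the equilibrium w* = C−1P coincides (for a large training set) with the least-squares solution, and a stochastic LMS iteration fluctuates about it. The data, target, and learning rate are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
z = X @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.normal(size=2000)

C = X.T @ X / len(X)                       # C = <x x^T>
P = X.T @ z / len(X)                       # P = <z x>
w_star = np.linalg.solve(C, P)             # equilibrium of the average learning equation

w, rho = np.zeros(3), 0.01                 # stochastic LMS iteration
for k in range(20000):
    i = rng.integers(len(X))
    w += rho * (z[i] - w @ X[i]) * X[i]
print("w* = C^-1 P      :", np.round(w_star, 3))
print("stochastic LMS w :", np.round(w, 3))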

Hebbian Learning

Here, upon setting r(w, x, z) = y = wTx, Equation (4.2.2) gives the Hebbian rule with decay

dw/dt = −γw + ρ (wTx) x    (4.2.18)

whose average is

⟨dw/dt⟩ = −γ⟨w⟩ + ρ C ⟨w⟩    (4.2.19)

Setting ⟨dw/dt⟩ = 0 in Equation (4.2.19) leads to the equilibria

C w* = (γ/ρ) w*    (4.2.20)

So if C happens to have γ/ρ as an eigenvalue, then w* will be the eigenvector of C corresponding to γ/ρ. In general, though, γ/ρ will not be an eigenvalue of C, so Equation (4.2.19) will have only one equilibrium at w* = 0. This equilibrium solution is asymptotically stable if γ/ρ is greater than the largest eigenvalue of C, since this makes the Hessian

H = (γ/ρ) I − C    (4.2.21)

positive definite. Now, employing Equation (4.2.4), we get the instantaneous criterion function minimized by the Hebbian rule in Equation (4.2.18):

J(w) = (γ/2ρ)||w||2 − (1/2)(wTx)2    (4.2.22)

The regularization term (γ/2ρ)||w||2 is not adequate here to stabilize the Hebbian rule at a solution which maximizes y2. However, other, more appropriate regularization terms can ensure stability, as we will see in the next section.

4.3 Characterization of Additional Learning Rules

Equation (4.2.5) [or (4.2.6)] is a powerful tool which can be used to characterize the average behavior of stochastic learning equations. We will employ it in this section in order to characterize some of the unsupervised learning rules considered in Chapter 3. The following analysis is made easier if one assumes that w and x are uncorrelated and that dw/dt is averaged with respect to x (denoted by ⟨·⟩x), with w replaced by its mean ⟨w⟩. This assumption leads to the "approximate" average learning equation

d⟨w⟩/dt = ⟨dw/dt⟩x    (4.3.1)

The above approximation of the average learning equation is valid when the learning equation contains strongly mixing random processes (processes for which the "past" and the "future" are asymptotically independent) with a mixing rate that is high compared to the rate of change of the solution process; i.e., it can be assumed that the weights are uncorrelated with the patterns x and with themselves. Taking the expected (average) value of a stochastic equation, one obtains a deterministic equation whose solution approximates asymptotically the behavior of the original system as described by the stochastic equation (here, a small constant learning rate is normally assumed). Roughly, the higher the mixing rate, the better the approximation in Equation (4.3.1) (Kushner and Clark, 1978). We shall frequently employ Equation (4.3.1) in the remainder of this chapter.

A more rigorous characterization of stochastic learning requires the more advanced theory of stochastic differential equations and will not be considered here [see Kushner (1977) and Ljung (1978)]. Rather, we may proceed with a deterministic analysis using the "average versions" of the stochastic equations. It may be shown that a necessary condition for the stochastic learning rule to converge (in the mean-square sense) is that the average version of the learning rule must converge. In addition, and under certain assumptions, the exact solution of a stochastic equation is guaranteed to "stay close," in a probabilistic sense, to the solution of the associated average equation. It has been shown (Geman, 1979) that, under strong mixing conditions (and some additional assumptions), the mean-square deviation between the stochastic solution and the average solution vanishes as the learning rate approaches zero. This result implies that if sufficiently small learning rates are used, the behavior of a stochastic learning equation may be well approximated, in a mean-square sense, by the deterministic dynamics of its corresponding average equation. Oja (1983) pointed out that the convergence of constrained gradient descent- (or ascent-) based stochastic learning equations (the type of equations considered in this chapter) can be studied with averaging techniques; i.e., the asymptotically stable equilibria of the average equation are the possible limits of the stochastic equation. Several examples of applying the averaging technique to the characterization of learning rules can be found in Kohonen (1989).
Before proceeding with further analysis of learning rules, we make the following important observations regarding the nature of the learning parameter in the stochastic learning equation (Heskes and Kappen, 1991). When a neural network interacts with a fixed, unchanging (stationary) environment, the aim of the learning algorithm is to adjust the weights of the network in order to produce an optimal response, i.e., an optimal representation of the environment. To produce such an optimal and static representation, we require the learning parameter, which controls the amount of learning, to eventually approach zero. Otherwise, fluctuations in the representation will persist, due to the stochastic nature of the learning equation, and asymptotic convergence to the optimal representation is never achieved. For a large class of stochastic algorithms, asymptotic convergence can be guaranteed (with high probability) by using a learning parameter that decays over time, for example in proportion to 1/t (Ljung, 1977; Kushner and Clark, 1978).

On the other hand, consider biological neural nets. Human beings, of course, are able to continue learning throughout their entire lifetime. In fact, human learning proceeds on two different time scales: humans learn with age (very large time scale adaptation/learning) and are also capable of discovering regularities and attending to details (short time scale learning). This constant tendency to learn accounts for the adaptability of natural neural systems to a changing environment. Therefore, it is clear that the learning process in biological neural networks does not allow for asymptotically vanishing learning parameters.

In order for artificial neural networks to be capable of adapting to a changing (nonstationary) environment, the learning parameter must take a constant nonzero value. The larger the learning parameter, the faster the response of the network to the changing environment. On the other hand, a large learning parameter has a negative effect on the accuracy of the network's representation of the environment at a given time; it gives rise to large fluctuations around the desired optimal representation. In practice, though, one might be willing to trade some degree of fluctuation about the optimal representation (solution) for adaptability to a nonstationary process. Similar ideas have been proposed in connection with stochastic adaptive linear filtering. Here, an adaptive algorithm with a constant step size is used because it has the advantage of a limited memory, which enables it to track time fluctuations in the incoming data. These ideas date back to Wiener (1956) in connection with his work on linear prediction theory.

4.3.1 Simple Hebbian Learning

We have already analyzed one version of the Hebbian learning rule in the previous section. However, here we consider the simplest form of Hebbian learning, which is given by Equation (4.2.18) with γ = 0; namely,

dw/dt = ρ y x = ρ (wTx) x    (4.3.2)

The above equation is a continuous-time version of the unsupervised Hebbian learning rule introduced in Chapter 3. Employing Equation (4.3.1), we get the approximate average learning equation

dw/dt = ρ C w    (4.3.3)

In Equation (4.3.3) and in the remainder of this chapter, the subscript x in ⟨·⟩x is dropped in order to simplify notation. Now, the average gradient of J in Equation (4.3.3) may be written as

∇⟨J⟩ = −C w    (4.3.4)

from which we may determine the average instantaneous criterion function

⟨J⟩ = −(1/2) wTC w = −(1/2)⟨y2⟩    (4.3.5)

Note that ⟨J⟩ is minimized by maximizing the unit's output variance. Again, C = ⟨x xT⟩ is the autocorrelation matrix, which is positive semidefinite, having orthonormal eigenvectors c(i) with corresponding eigenvalues λi. That is, Cc(i) = λic(i) for i = 1, 2, ..., n.
The dynamics of Equation (4.3.3) are unstable. To see this, we first find the equilibrium points by setting dw/dt = 0, giving Cw = 0 or w* = 0. The point w* is unstable because the Hessian (in an average sense) H = ∇2⟨J⟩ = −C is negative semidefinite for all w. Therefore, Equation (4.3.3) is unstable and results in ||w|| → ∞. Note, however, that the direction of w is not random; it will tend to point in the direction of c(1), since, if one assumes a fixed weight vector magnitude, ⟨J⟩ is minimized when w is parallel to the eigenvector with the largest corresponding eigenvalue.

In the following, we will characterize other versions of the Hebbian learning rule, some of which were introduced in Chapter 3. These rules are well behaved and hence solve the divergence problem encountered with simple Hebbian learning. To simplify mathematical notation and terminology, the following sections will use J, ∇J, and H to designate ⟨J⟩, ∇⟨J⟩, and ⟨H⟩, respectively. We will refer to ⟨J⟩ as simply the criterion function, to ∇⟨J⟩ as the gradient of J, and to ⟨H⟩ as the Hessian of J. Also, the quantity w in the following average equations should be interpreted as the state of the average learning equation.

4.3.2 Improved Hebbian Learning

Consider the criterion function

J(w) = −(1/2) (wTC w)/(wTw)    (4.3.6)

It is a well-established property of quadratic forms that if w is constrained to the surface of the unit hypersphere, then Equation (4.3.6) is minimized when w = c(1), with J(c(1)) = −λ1/2 (e.g., see Johnson and Wichern, 1988). Also, for any real symmetric n × n matrix A, the Rayleigh quotient (wTA w)/(wTw) satisfies λn ≤ (wTA w)/(wTw) ≤ λ1, where λ1 and λn are the largest and smallest eigenvalues of A, respectively. Let us start from the above criterion and derive an average learning equation. Employing Equation (4.3.1), we get

dw/dt = ρ [C w/(wTw) − (wTC w) w/(wTw)2]    (4.3.7)

which can be shown to be the average version of the nonlinear stochastic learning rule

dw/dt = ρ [y x/||w||2 − y2 w/||w||4]    (4.3.8)

If we heuristically set ||w|| = 1 in the two terms of the above equation, Equation (4.3.8) reduces to the continuous version of Oja's rule [refer to Equation (3.3.6)]. Let us continue with the characterization of (4.3.8) and defer the analysis of Oja's rule to Section 4.3.3. At equilibrium, Equation (4.3.7) gives

C w* = [(w*TC w*)/(w*Tw*)] w*    (4.3.9)

Hence, the equilibria of Equation (4.3.7) are the solutions of Equation (4.3.9), given by w* = c(i), i = 1, 2, ..., n. Here, J takes its smallest value of −λ1/2 at w* = c(1). This can be easily verified by direct substitution in Equation (4.3.6).

Next, consider the Hessian of J at w* = c(i) (assuming, without loss of generality, ρ = 1) and multiply it by c(j), namely, H(c(i))c(j). It can be shown that this quantity is given by (see Problem 4.3.1; for a reference on matrix differential calculus, the reader is referred to the book by Magnus and Neudecker, 1988):

H(c(i)) c(j) = (λi − λj) c(j)    (4.3.10)

This equation implies that H(w*) has the same eigenvectors as C but with different eigenvalues. H(w*) is positive semidefinite only when w* = c(1). Thus, by following the dynamics of Equation (4.3.7), w will eventually point in the direction of c(1) (since none of the other directions c(i) is stable). Although the direction of w will eventually stabilize, it is entirely possible for ||w|| to approach infinity, and Equation (4.3.7) will appear never to converge. We may artificially constrain ||w|| to a finite value by normalizing w after each update of Equation (4.3.8). Alternatively, we may set the two ||w|| terms in Equation (4.3.8) equal to 1. This latter case is considered next.

4.3.3 Oja's Rule

Oja's rule was defined by Equation (3.3.6). Its continuous-time version is given by the nonlinear stochastic equation

dw/dt = ρ y (x − y w)    (4.3.11)

The corresponding average learning equation is thus (Oja, 1982; 1989)

dw/dt = ρ [C w − (wTC w) w]    (4.3.12)

which has its equilibria at w satisfying

C w = (wTC w) w    (4.3.13)

The solutions of Equation (4.3.13) are w* = ±c(i), i = 1, 2, ..., n. All of these equilibria are unstable except for w* = ±c(1). This can be seen by noting that the Hessian

H(w) = (wTC w) I − C + 2 w wTC    (4.3.14)

is positive definite only at w* = c(1) (or −c(1)). Note that Equation (4.3.14) is derived starting from H = −∂(dw/dt)/∂w (with ρ = 1), with dw/dt given in Equation (4.3.12). Although J is not known, the positive definiteness of H can be seen from

c(i)TH(c(1)) c(i) = λ1 − λi for i ≠ 1, and c(1)TH(c(1)) c(1) = 2λ1    (4.3.15)

and by noting that the eigenvalues of C satisfy λ1 > λi for i ≠ 1 (we assume λ1 > λ2). Therefore, Oja's rule is equivalent to a stable version of the Hebbian rule given in Equation (4.3.8). A formal derivation of Oja's rule is explored in Problem 4.3.7.

A single unit employing Oja's rule (Oja's unit) is equivalent to a linear matched filter. To see this, assume that every input has the form x = x̄ + v, where x̄ is a fixed vector (without loss of generality let ||x̄|| = 1) and v is a vector of symmetrically distributed zero-mean noise with uncorrelated components having variance σ2. Then C = ⟨x xT⟩ = x̄ x̄T + σ2I. The largest eigenvalue of C is 1 + σ2, and the corresponding eigenvector is c(1) = x̄. Oja's unit then becomes a matched filter for the data, since w → ±x̄ in Equation (4.3.12). Here, the unit responds maximally to the data mean. Further characterization of Oja's rule can be found in Xu (1993).

Oja's rule is interesting because it is a local learning rule, which makes it biologically plausible. The locality property is seen by considering the componentwise weight adaptation rule of Equation (4.3.11), namely

dwi/dt = ρ y (xi − y wi)    (4.3.16)

and by noting that the change in the ith weight is not an "explicit" function of any other weight except the ith weight itself. Of course, y does depend on w via y = wTx. However, this dependence does not violate the concept of locality.

It is also interesting to note that Oja's rule is similar to Hebbian learning with weight decay, as in Equation (4.2.18). For Oja's rule, though, the growth in ||w|| is controlled by a "forgetting" or weight-decay term, −y2w, which has a nonlinear gain; the forgetting becomes stronger with stronger response, thus preventing ||w|| from diverging.
Example 4.3.1: This example shows typical simulation results comparing the evolution of the weight
vector w according to the stochastic Oja rule in Equation (4.3.11) and its corresponding average rule in
Equation (4.3.12).

Consider a training set {x} of forty 15-dimensional column vectors with independent random components generated by a normal distribution N(0, 1). In the following simulations, the autocorrelation matrix C of the training vectors has the following set of eigenvalues: {2.561, 2.254, 2.081, 1.786, 1.358, 1.252, 1.121, 0.963, 0.745, 0.633, 0.500, 0.460, 0.357, 0.288, 0.238}.

During training, one of the forty vectors is selected at random and is used in the learning rule to compute the next weight vector. Discretized versions of Equations (4.3.11) and (4.3.12) are used, where dw/dt is replaced by wk+1 − wk. A learning rate of 0.005 is used. This is equivalent to integrating these equations with Euler's method (e.g., see Gerald, 1978) using a time step Δt = 0.005 and ρ = 1. The initial weight vector is set equal to one of the training vectors. Figures 4.3.1 (a) and (b) show the evolution of the cosine of the angle between wk and c(1) and the evolution of the norm of wk, respectively. The solid line corresponds to the stochastic rule and the dashed line corresponds to the average rule.


Figure 4.3.1. (a) Evolution of the cosine of the angle between the weight vector wk and the principal
eigenvector of the autocorrelation matrix C for the stochastic Oja rule (solid line) and for the average Oja
rule (dashed line). (b) Evolution of the magnitude of the weight vector. The training set consists of forty 15-
dimensional real-valued vectors whose components are independently generated according to a normal
distribution N(0, 1). The presentation order of the training vectors is random during training.

The above simulation is repeated, but with a fixed presentation order of the training set. Results are shown in Figures 4.3.2 (a) and (b). Note that the results for the average learning equation (dashed line) are identical in both simulations, since they are not affected by the order of presentation of the input vectors. These simulations agree with the theoretical results on the appropriateness of using the average learning equation to approximate the limiting behavior of its corresponding stochastic learning equation. Note that a monotonically decreasing learning rate (say, proportional to 1/k, with k ≥ 1) can be used to force the convergence of the direction of wk in the first simulation. It is also interesting to note that better approximations are possible when the training vectors are presented in a fixed deterministic order (or in a random order but with each vector guaranteed to be selected once every training cycle of m = 40 presentations). Here, a sufficiently small, constant learning rate is sufficient for making the average dynamics approximate, in a practical sense, the stochastic dynamics for all time.

Figure 4.3.2. (a) Evolution of the cosine of the angle between the weight vector wk and the principal
eigenvector of the autocorrelation matrix C for the stochastic Oja rule (solid line) and for the average Oja
rule (dashed line). (b) Evolution of the magnitude of the weight vector. The training set consists of forty 15-
dimensional real-valued vectors whose components are independently generated according to a normal
distribution N(0, 1). The presentation order of the training vectors is fixed.
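
The sketch below reproduces the spirit of Example 4.3.1: a discretized stochastic Oja rule is compared against its average (deterministic) version on the same data. The learning rate of 0.005 follows the example; the data themselves are regenerated, so the eigenvalues will differ from those quoted above:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 15))                       # forty 15-dimensional training vectors
C = X.T @ X / len(X)                                # autocorrelation matrix
c1 = np.linalg.eigh(C)[1][:, -1]                    # principal eigenvector of C

eta = 0.005
w_sto = X[0].copy()                                 # start from one of the training vectors
w_avg = X[0].copy()
for k in range(4000):
    x = X[rng.integers(len(X))]
    y = w_sto @ x
    w_sto += eta * y * (x - y * w_sto)              # stochastic Oja rule
    w_avg += eta * (C @ w_avg - (w_avg @ C @ w_avg) * w_avg)   # average Oja rule

for name, w in (("stochastic", w_sto), ("average", w_avg)):
    cos = abs(w @ c1) / np.linalg.norm(w)
    print(f"{name}: |cos(angle to c1)| = {cos:.4f}, ||w|| = {np.linalg.norm(w):.4f}")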

4.3.4 Yuille et al. Rule

The continuous-time version of the Yuille et al. (1989) learning rule is

dw/dt = ρ [y x − ||w||2 w]    (4.3.17)

and the corresponding average learning equation is

dw/dt = ρ [C w − ||w||2 w]    (4.3.18)

with equilibria at

w* = ±(λi)1/2 c(i),  i = 1, 2, ..., n    (4.3.19)

From Equation (4.3.18), the gradient of J is

∇J = −C w + ||w||2 w    (4.3.20)

which leads to the Hessian

H = −C + ||w||2 I + 2 w wT    (4.3.21)

Note that one could have also computed H directly from the known criterion function

J(w) = −(1/2) wTC w + (1/4)||w||4    (4.3.22)

Now, evaluating H(wi*)wj* we get

H(wi*) wj* = (λi − λj) wj* for i ≠ j, and H(wi*) wi* = 2λi wi*    (4.3.23)

which implies that the wj* are eigenvectors of H(wi*) with eigenvalues λi − λj for i ≠ j and 2λi for i = j. Therefore, H(wi*) is positive definite if and only if λi > λj for all j ≠ i. In this case, w* = ±(λ1)1/2 c(1) are the only stable equilibria, and the dynamics of the stochastic equation are expected to approach w*.

4.3.5 Hassoun's Rule

In the following, the unsupervised Hebbian-type learning rule

dw/dt = ρ [y x − β(1 − 1/||w||) w],  ||w|| ≠ 0    (4.3.24)

is analyzed. Another way of stabilizing Equation (4.3.2) is to start from a criterion function that explicitly penalizes the divergence of w. For example, if we desire ||w|| = 1 to be satisfied, we may utilize J given by

J(w) = −(1/2) wTC w + (β/2)(||w|| − 1)2    (4.3.25)

with β > 0. It can be easily shown that steepest gradient descent on the above criterion function leads to the learning equation

dw/dt = ρ [C w − β(1 − 1/||w||) w]    (4.3.26)

Equation (4.3.26) is the average learning equation for the stochastic rule of Equation (4.3.24). Its equilibria are solutions of the equation

C w = β(1 − 1/||w||) w    (4.3.27)

Thus, it can be seen that the solution vectors of Equation (4.3.27), denoted by w*(i), must be parallel to one of the eigenvectors of C, say c(i),

w*(i) = ±||w*(i)|| c(i), with λi = β(1 − 1/||w*(i)||)    (4.3.28)

where λi is the ith eigenvalue of C. From Equation (4.3.28) it can be seen that the norm of the ith equilibrium state is given by

||w*(i)|| = β/(β − λi)    (4.3.29)

which requires β > λi for all i. Note that if β >> λi, then ||w*(i)|| approaches one for all equilibrium points. Thus, the equilibria of Equation (4.3.26) approach unit-norm eigenvectors of the correlation matrix C when β is large.

Next, we investigate the stability of these equilibria. Starting from the Hessian

H = −C + β(1 − 1/||w||) I + (β/||w||3) w wT    (4.3.30)

we have, at the equilibria w*(i),

c(j)TH(w*(i)) c(j) = λi − λj for j ≠ i, and c(i)TH(w*(i)) c(i) = β − λi    (4.3.31)

which implies that H(w*(i)) is positive definite if and only if w*(i) is parallel to c(1) and β > λ1. Therefore, the only stable equilibria of Equation (4.3.26) are w*(1) = ±[β/(β − λ1)] c(1), which approach ±c(1) for β >> λ1. Like the Yuille et al. rule, this rule preserves the information about the size of λ1 [λ1 can be computed from Equation (4.3.29)].

For the discrete-time version, it is interesting to note that for the case β >> λ1 and ρ = 1, Equation (4.3.24)
reduces to

(4.3.32)

which is very similar to the discrete-time simple Hebbian learning rule with weight vector normalization of
Equation (3.3.5) and expressed here as
(4.3.33)

Note that the weight vector in Equation (4.3.33) need not be normalized to prevent divergence.

In practice, discrete-time versions of the stochastic learning rules in Equations (4.3.11), (4.3.17), and (4.3.24) are used, where dw/dt is replaced by wk+1 − wk and w(t) by wk. Here, the stability of these discrete-time stochastic dynamical systems depends critically on the value of the learning rate. Although the stability analysis is difficult, one can resort to the discrete-time versions of the average learning equations corresponding to these rules and derive conditions on the learning rate for asymptotic convergence (in the mean) in the neighborhood of the equilibrium states w*. Such analysis is explored in Problems 4.3.8 and 4.3.9.

In concluding this section, another interpretation of the regularization effects on the stabilization of Hebbian-type rules is presented. In the stochastic learning Equations (4.2.18), (4.3.11), (4.3.17), and (4.3.24), regularization appears as the weight decay terms −γw, −y2w, −||w||2w, and −β(1 − 1/||w||)w, respectively. Therefore, one may think of weight decay as a way of stabilizing unstable learning rules. However, one should carefully design the gain coefficient in the weight decay term for proper performance. For example, it has been shown earlier that a simple positive constant gain in Equation (4.2.18) does not stabilize Hebb's rule. On the other hand, the nonlinear dynamic gains y2, ||w||2, and β(1 − 1/||w||) lead to stability. Note that the weight decay gain in Oja's rule utilizes more information, in the form of y2, than the Yuille et al. rule or Hassoun's rule; the regularization in these latter rules is only a function of the current weight vector magnitude.

4.4 Principal Component Analysis (PCA)

The PCA network of Section 3.3.5, employing Sanger's rule, is analyzed in this section. Recalling Sanger's rule [from Equation (3.3.19)] and writing it in vector form for the continuous-time case, we get

dwi/dt = ρ yi [x − Σj≤i yj wj]    (4.4.1)

with i = 1, 2, ..., m, where m is the number of units in the PCA network. We will assume, without any loss of generality, that m = 2. This leads to the following set of coupled learning equations for the two units:

dw1/dt = ρ y1 [x − y1 w1]    (4.4.2)

and

dw2/dt = ρ y2 [x − y1 w1 − y2 w2]    (4.4.3)

Equation (4.4.2) is Oja's rule. It is independent of unit 2 and thus converges to ±c(1), the principal eigenvector of the autocorrelation matrix of the input data (assuming zero-mean input vectors). Equation (4.4.3) is Oja's rule with the added inhibitory term −ρ y1 y2 w1. Next, we will assume a sequential operation of the two-unit net where unit 1 is allowed to fully converge before evolving unit 2. This mode of operation is permissible since unit 1 is independent of unit 2.

With the sequential update assumption, Equation (4.4.3) becomes

dw2/dt = ρ y2 [x − y1 c(1) − y2 w2]    (4.4.4)

For clarity, we will drop the subscript on w2. Now, the average learning equation for unit 2 is given by

dw/dt = ρ [C w − (wTC w) w − λ1 (c(1)Tw) c(1)]    (4.4.5)

which has equilibria satisfying

C w* − (w*TC w*) w* − λ1 (c(1)Tw*) c(1) = 0    (4.4.6)

Hence, w* = 0 and w* = ±c(i) with i = 2, 3, ..., n are solutions. Note that the point w* = ±c(1) is not an equilibrium. The Hessian is given by

H(w) = −C + (wTC w) I + 2 w wTC + λ1 c(1)c(1)T    (4.4.7)

Since H(0) = λ1 c(1)c(1)T − C is not positive definite, the equilibrium w* = 0 is not stable. For the remaining equilibria we have the Hessian matrix

H(±c(i)) = −C + λi I + 2λi c(i)c(i)T + λ1 c(1)c(1)T    (4.4.8)

which is positive definite only at w* = ±c(2), assuming λ2 > λ3. Thus, Equation (4.4.5) converges asymptotically to the unique stable vector ±c(2), which is the eigenvector of C with the second-largest eigenvalue λ2. Similarly, for a network with m units interacting according to Equation (4.4.1), the ith unit (i = 1, 2, ..., m) will extract the ith eigenvector of C.

The unit-by-unit description presented here helps simplify the explanation of the PCA net's behavior. In fact, the weight vectors wi approach their final values simultaneously, not one at a time; the above analysis still applies, asymptotically, to the end points. Note that the simultaneous evolution of the wi is advantageous since it leads to faster learning than if the units are trained one at a time.
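
A minimal sketch of a two-unit PCA net trained with a discrete-time version of Sanger's rule follows: unit i subtracts the projections onto units 1 through i before its Oja-like update. The data, learning rate, and iteration count are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
X = rng.normal(size=(5000, 3)) @ A.T                 # correlated zero-mean data

eta, W = 0.002, rng.normal(scale=0.1, size=(2, 3))   # two units, three inputs
for k in range(30000):
    x = X[rng.integers(len(X))]
    y = W @ x
    for i in range(2):                               # Sanger's update, unit by unit
        resid = x - y[:i + 1] @ W[:i + 1]            # x minus projections onto units 1..i
        W[i] += eta * y[i] * resid

C = X.T @ X / len(X)
eigvecs = np.linalg.eigh(C)[1][:, ::-1]              # eigenvectors, largest eigenvalue first
for i in range(2):
    cos = abs(W[i] @ eigvecs[:, i]) / np.linalg.norm(W[i])
    print(f"unit {i+1}: |cos(angle to c({i+1}))| = {cos:.3f}")
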
4.5 Theory of Reinforcement Learning

Recall the simplified stochastic reinforcement learning rule of Equation (3.2.5). The continuous-time
version of this rule is given by

(4.5.1)

from which the average learning equation is given as

(4.5.2)

Now, employing Equation (4.2.6), we may think of Equation (4.5.2) as implementing a gradient search on an average instantaneous criterion function ⟨J⟩, whose gradient is given by

(4.5.3)

In Equations (4.5.1) through (4.5.3), the output y is generated by a stochastic unit according to the probability function P(y | w, x), given by

(4.5.4)

with the expected output

(4.5.5)

as in Section 3.1.6. Next, it is shown that the gradient in Equation (4.5.3) is proportional to the gradient of the expected reinforcement signal ⟨r⟩ (Williams, 1987; Hertz et al., 1991).

First, we express the expected (average) reinforcement signal for the kth input vector, taken with respect to all possible outputs y, as

(4.5.6)

and then evaluate its gradient with respect to w. The gradient of this expected reinforcement signal is given by
(4.5.7)

which follows from Equation (4.5.4). We also have

(4.5.8)

and (4.5.9)

which can be used in Equation (4.5.7) to give

(4.5.10)

If we now take the gradient of Equation (4.5.6) and use (4.5.10), we arrive at

(4.5.11)

which can also be written as

(4.5.12)

Finally, by averaging Equation (4.5.12) over all inputs xk, we get

(4.5.13)

where now the averages are taken across all patterns and all outputs. Note that ∇w⟨r⟩ is proportional to the gradient in Equation (4.5.3) and has the opposite sign. Thus, Equation (4.5.2) can be written in terms of ∇w⟨r⟩ as

(4.5.14)

which implies that the average weight vector converges to a local maximum of ⟨r⟩; i.e., Equation (4.5.1) converges, on average, to a solution that locally maximizes the average reinforcement signal.

Extensions of these results to a wider class of reinforcement algorithms can be found in Williams (1987). The characterization of the associative reward-penalty algorithm in Equation (3.2.1) is more difficult, since it does not necessarily maximize ⟨r⟩. However, the above analysis should give some insight into the behavior of simple reinforcement learning.
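
The following sketch illustrates the kind of simple reinforcement learning analyzed above, under explicit assumptions: a stochastic bipolar unit with P(y = +1 | w, x) taken as a logistic function of wTx (so that ⟨y⟩ = tanh(wTx)), an update of the form Δw = ρ r (y − ⟨y⟩)x, and a reward of +1 when the stochastic output matches a target sign and −1 otherwise; the data and target rule are illustrative:

import numpy as np

rng = np.random.default_rng(0)
w, rho = np.zeros(2), 0.05
w_true = np.array([1.0, -1.0])                       # defines the target classification (assumption)

avg_r = 0.0
for k in range(20000):
    x = rng.normal(size=2)
    target = np.sign(w_true @ x)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * (w @ x)))    # P(y = +1 | w, x)
    y = 1.0 if rng.random() < p_plus else -1.0
    r = 1.0 if y == target else -1.0
    w += rho * r * (y - np.tanh(w @ x)) * x          # on average, gradient ascent on <r>
    avg_r += (r - avg_r) / (k + 1)
print("running average reinforcement:", round(avg_r, 3))
print("learned weight direction:", np.round(w / np.linalg.norm(w), 3))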

4.6 Theory of Simple Competitive Learning

In this section we attempt to characterize simple competitive learning. Two approaches are described: one
deterministic and the other statistical.

4.6.1 Deterministic Analysis

Consider a single layer of linear units, where each unit uses the simple continuous-time competitive rule [based on Equations (3.4.3) and (3.4.5)]

dwi/dt = ρ (x − wi) if unit i is the winner, and dwi/dt = 0 otherwise    (4.6.1)

Also, consider the criterion function (Ritter and Schulten, 1988a)

J = (1/2) Σk ||xk − wi*(k)||2    (4.6.2)

where wi*(k) is the weight vector of the winning unit upon the presentation of the input vector xk. Here, all vectors xk are assumed to be equally probable. In general, a probability of occurrence of xk, P(xk), should be inserted inside the summation in Equation (4.6.2). An alternative way of expressing J in Equation (4.6.2) is through the use of a "cluster membership matrix" M, defined for each unit i = 1, 2, ..., n by

Mik = 1 if unit i wins upon the presentation of xk, and Mik = 0 otherwise    (4.6.3)

Here, Mik is a dynamically evolving function of k and i which specifies whether or not unit i is the winning unit upon the presentation of input xk. The cluster membership matrix allows us to express the criterion function J as

J = (1/2) Σi Σk Mik ||xk − wi||2    (4.6.4)

Now, performing gradient descent on J in Equation (4.6.4) yields

Δwi = ρ Σk Mik (xk − wi)    (4.6.5)
which is the batch-mode version of the learning rule in Equation (4.6.1). It was noted by Hertz et al. (1991) that the above batch-mode competitive learning rule corresponds to the k-means clustering algorithm when a finite training set is used. The local rule of Equation (4.6.1) may have an advantage over the batch-mode rule in Equation (4.6.5), since stochastic noise due to the random presentation order of the input patterns may kick the solution out of "poor" minima toward minima which are more optimal. However, only in the case of "sufficiently sparse" input data points can one prove stability and convergence theorems for the stochastic (incremental) competitive learning rule (Grossberg, 1976). The data points are sparse enough if there exists a set of clusters such that the minimum overlap within a cluster exceeds the maximum overlap between that cluster and any other cluster. In practice, a damped learning rate ρk is used (e.g., ρk = ρ0/k, where k is the iteration index and ρ0 is a positive constant) in order to stop weight evolution at one of the local solutions. Here, the relatively large initial learning rate allows for wide exploration during the initial phase of learning.

Criterion functions other than the one in Equation (4.6.4) may be employed which incorporate some interesting heuristics into the competitive rule for enhancing convergence speed or for altering the underlying "similarity measure" implemented by the learning rule. For example, the distance term in Equation (4.6.4) may be modified so that the winning weight vector is repelled by input vectors in other clusters, while being attracted by its own cluster, which enhances convergence. Another example is to employ a different similarity measure (norm) in J, such as the Minkowski-r norm of Equation (3.1.68), which has the ability to reduce the effects of outlier data points through a proper choice of the exponent r. Other criterion functions may also be employed; the reader is referred to Bachmann et al. (1987) for yet another suitable criterion function.
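
A minimal sketch of the batch-mode competitive rule of Equation (4.6.5) follows; as noted above, with a finite training set and a full batch step it coincides with the k-means centroid update. Each epoch recomputes the membership matrix and moves every weight vector to the mean of its own cluster. Data, number of units, and epoch count are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(4)
data = np.vstack([rng.normal([0, 0], 0.5, (100, 2)),
                  rng.normal([4, 4], 0.5, (100, 2)),
                  rng.normal([0, 5], 0.5, (100, 2))])

W = data[rng.choice(len(data), 3, replace=False)].copy()   # three units, seeded from the data
for epoch in range(20):
    M = np.argmin(np.linalg.norm(data[:, None] - W[None], axis=2), axis=1)  # winners
    for i in range(len(W)):
        members = data[M == i]
        if len(members):
            W[i] += 1.0 * (members.mean(axis=0) - W[i])     # full batch step = k-means update
print("final weight (cluster-center) vectors:\n", np.round(W, 2))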

4.6.2 Stochastic Analysis

The following is an analysis of simple competitive learning based on the stochastic approximation technique introduced in Section 4.3. Consider the following normalized discrete-time competitive rule (von der Malsburg, 1973; Rumelhart and Zipser, 1985):

(4.6.6)

where, again, the setting is a single-layer network of linear units. Here, the learning rate is positive and typically small. Also, the weight normalization Σj wij = 1 is assumed for all units. It can be easily verified that Equation (4.6.6) preserves this weight normalization at any iteration (this was explored in Problem 3.4.1).

Let P(xk) be the probability that input xk is presented on any trial. Then, the average learning equation may
be expressed as

(4.6.7)
where P(i | xk) is the conditional probability that unit i wins when input xk is presented. Now, using
Equation (4.6.6) in Equation (4.6.7) we get

(4.6.8)

which implies that at equilibrium

(4.6.9)

Therefore, the jth component of vector wi is given as

(4.6.10)

We now make the following observations. First, note that the denominator of Equation (4.6.10) is the probability that unit i wins, averaged over all stimulus patterns. Note further that the numerator is proportional to the probability that the jth input bit is active and that unit i is the winner. Thus, assuming that all patterns have the same number of active bits, we may employ Bayes' rule and write Equation (4.6.10) as

(4.6.11)

which states that at equilibrium, wij is expected to be proportional to the conditional probability that the jth bit of input xk is active given that unit i is a winner.

Next, upon the presentation of a new pattern x [assuming the equilibrium weight values given by Equation (4.6.9)], unit i will have a weighted sum (activity) of

(4.6.12)

or

(4.6.13)

where the term inside the sum represents the overlap between the new stimulus x and the kth training pattern xk. Thus, at equilibrium, a unit responds most strongly to patterns that overlap other patterns to which the unit responds, and most weakly to patterns that are far from patterns to which it responds. Note that we may express the conditional probability P(i | xk) according to the winner-take-all mechanism

(4.6.14)

Because of the dependency of P(i | xk) on wi, there are many solutions which satisfy the equilibrium relation given in Equation (4.6.9).

Equation (4.6.6) leads the search to one of many stable equilibrium states satisfying Equations (4.6.9) and (4.6.14). In such a state, the activations of the ith unit become stable (fluctuate minimally), and therefore P(i | xk) becomes stable. A sequence of stimuli might, however, be presented in such a way as to introduce relatively large fluctuations in the wi's. In this case, the system might move to a new equilibrium state which is, generally, more stable in the sense that P(i | xk) becomes unlikely to change values for a very long period of time. Rumelhart and Zipser (1985) gave a measure of the stability of an equilibrium state as the average amount by which the output of the winning unit is greater than the response of all of the other units, averaged over all patterns and all clusters. This stability measure is given by

(4.6.15)

where the averaging is taken over all xk and i* is the index of the winning unit. Note that Equation (4.6.15) can be written as
(4.6.16)

The larger the value of J, the more stable the system is expected to be. Maximizing J can also be viewed as maximizing the overlap among patterns within a group (cluster) while minimizing the overlap among patterns belonging to different groups; this is exactly what is required for the clustering of unlabeled data. In geometric terms, J is maximized when the weight vectors point toward maximally compact stimulus (input) regions that are as distant as possible from other such regions.

4.7 Theory of Feature Mapping

The characterization of topology-preserving feature maps has received special attention in the literature (Kohonen, 1982b; Cottrell and Fort, 1986; Ritter and Schulten, 1986 and 1988b; Tolat, 1990; Heskes and Kappen, 1993a; Lo et al., 1993; Kohonen, 1993b). In particular, Takeuchi and Amari (1979) and Amari (1980 and 1983) have extensively studied a continuous-time dynamical version of this map in order to investigate the topological relation between the self-organized map and the input space governed by the density p(x), the resolution and stability of the map, and the convergence speed. The characterization of a general feature map is difficult, and much of the analysis has been done under simplifying assumptions.

In the following, a one-dimensional version of Kohonen's self-organizing feature map is characterized following the approach of Ritter and Schulten (1986). A continuous dynamical version of Kohonen's map is also described and analyzed.

4.7.1 Characterization of Kohonen's Feature Map

Consider the criterion function J(w) defined by (Ritter and Schulten, 1988a)

(4.7.1)

where i* is the label of the winning unit upon the presentation of stimulus (input) xk, and where the contribution of each unit i is weighted by the neighborhood function introduced in Section 3.5. It can be seen that Equation (4.7.1) is an extension of the competitive learning criterion function of Equation (4.6.2). Performing gradient descent on (4.7.1) yields

(4.7.2)

which is just the batch-mode version of Kohonen's self-organizing rule in Equation (3.5.1). Thus, Kohonen's rule is a stochastic gradient descent search and leads, on average and for small learning rates, to a local minimum of J in Equation (4.7.1). These minima are given as solutions to

(4.7.3)

This equation is not easy to solve; it depends on the choice of the neighborhood function and on the distribution p(x). Actually, we desire the global minimum of the criterion function J. Local minima of J are topological defects, such as kinks in one-dimensional maps and twists in two-dimensional maps (Kohonen, 1989; Geszti, 1990).

The analysis of feature maps becomes more tractable if one replaces Equation (4.7.3) with a continuous
version that assumes a continuum of units and where the distribution p(x) appears explicitly, namely

(4.7.4)

where r* = r*(x) is the coordinate vector of the winning unit upon the presentation of input x.

An implicit partial differential equation for w can be derived from Equation (4.7.4) [Ritter and Schulten,
1986]. However, for the case of two- or higher-dimensional maps, no explicit solutions exist for w(r) given
an arbitrary p(x). On the other hand, solutions of Equation (4.7.4) are relatively easy to find for the one-
dimensional map with scalar r and a given input distribution p(x). Here, the equilibrium w* satisfies (Ritter
and Schulten, 1986)

(4.7.5)

which, in turn, satisfies the implicit differential equation corresponding to Equation (4.7.4), given below (assuming a sharply peaked, symmetric neighborhood function):

(4.7.6)

In Equations (4.7.5) and (4.7.6), the term p(w) is given by p(x)|x=w. Equation (4.7.5) shows that the density of the units in w space around point r is proportional to the 2/3 power of p(w). This verifies the density-preserving feature of the map. Ideally, however, we would have a unit density proportional to p(w) itself for zero distortion. Therefore, a self-organizing feature map tends to undersample high-probability regions and oversample low-probability ones.

Finally, one may obtain the equilibria w* by solving Equation (4.7.6). The local stability of some of these equilibria is ensured (with a probability approaching 1) if the learning coefficient is a sufficiently small, positive, decaying function of time whose decay satisfies the following necessary and sufficient conditions (Ritter and Schulten, 1988b)

(4.7.7a)

and

(4.7.7b)
In particular, a power-law decay of the learning coefficient with exponent α, 0 < α ≤ 1, ensures convergence. For power laws with α > 1 or for exponential decay laws, Equation (4.7.7a) is not fulfilled, and some residual error remains even in the limit t → ∞. It can also be shown that during convergence the map first becomes untangled and fairly even, and then moves into a refinement phase where it adapts to the details of p(x). Occasionally, the "untangling" phase can slow convergence, because some types of tangle (e.g., kinks and twists) can take a long time to untangle. Geszti (1990) suggested the use of a strongly asymmetric neighborhood function in order to speed up learning by breaking the symmetry effects responsible for the slow untangling of kinks and twists.
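
The behavior described above can be sketched with a small simulation of a one-dimensional Kohonen map. The Gaussian neighborhood, the particular decay schedules, and the nonuniform input density chosen below are illustrative assumptions rather than prescriptions from the text:

import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 20000
w = rng.uniform(0.0, 1.0, N)           # random initial one-dimensional weight "map"
r = np.arange(N)                       # unit coordinates on the chain

for t in range(1, T + 1):
    x = rng.uniform(0.0, 1.0) ** 2     # nonuniform input density, p(x) large near 0
    i_star = np.argmin(np.abs(w - x))  # winning unit
    sigma = max(0.5, 5.0 * (1.0 - t / T))   # slowly shrinking neighborhood width
    eta = 0.5 * t ** -0.7              # decaying learning rate (exponent between 1/2 and 1, cf. Eq. (4.7.7))
    Lam = np.exp(-((r - i_star) ** 2) / (2.0 * sigma ** 2))
    w += eta * Lam * (x - w)           # stochastic Kohonen update

print(np.round(w, 3))                  # weights should be nearly monotonic along the chain

After training, the weights are ordered along the chain (no kinks when the neighborhood shrinks slowly) and are packed more densely where p(x) is high, though not in exact proportion to p(x), in line with the discussion of Equation (4.7.5) above.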

4.7.2 Self-Organizing Neural Fields

Consider a continuum of units arranged as an infinite two-dimensional array (neural field). Each point (unit) on this array may be represented by a position vector r and has an associated potential u(r). The output of the unit at r is assumed to be a nonlinear function of its potential, y(r) = f [u(r)], where f is either a monotonically nondecreasing positive saturating activation function or a step function. Associated with each unit r is a set of input weights w(r) and a set of lateral weights that depend only on the difference r − r'. This lateral weight distribution is assumed to be of the on-center off-surround type, as shown in Figure 4.7.1 for the one-dimensional case. Here, a unit at position r makes excitatory connections with all of its neighbors located within a certain distance of r, and makes inhibitory connections with all other units.

Figure 4.7.1. One-dimensional plot of a neural field's lateral weight distribution.

The dynamics of the neural field potential u(r, t) are given by

(4.7.8)

where

(4.7.9)

and h is a constant bias field. In Equation (4.7.8), it is assumed that the potential u(r, t) decays with a time constant τ to the resting potential h in the absence of any stimulation. Also, it is assumed that this potential increases in proportion to the total stimulus s(r, x), which is the sum of the lateral stimuli and the input stimulus wTx due to the input signal x ∈ Rn. A conceptual diagram for the
above neural field is shown in Figure 4.7.2.

Figure 4.7.2. Self-organizing neural field.

In Equation (4.7.8), the rates of change of the lateral weights and of the input weights w, if any, are assumed to be much slower than that of the neural field potential. The input signal (pattern) x is a random time sequence, and it is assumed that a pattern x is chosen according to the probability density p(x). Also, we assume that inputs are applied to the neural field for a time duration that is longer than the time constant τ of the neural field potential. On the other hand, the duration of stimulus x is assumed to be much shorter than the time constant τ' of the weights w. Thus, the potential distribution u(r, t) can be considered to change in a quasi-equilibrium manner denoted by u(r, x).

An initial excitation pattern applied to the neural field changes according to the dynamics given in Equation
(4.7.8), and eventually converges to one of the equilibrium solutions. Stable equilibrium solutions are the
potential fields u(r) which the neural field can retain persistently under a constant input x. The equilibria of
Equation (4.7.8) are obtained by setting the time derivative of u to zero; that is, they satisfy

(4.7.10)

where s(r, u*) is the total stimulus at equilibrium. When the lateral connection distribution is strongly off-surround inhibitory then, given any x, only a local excitation pattern is aroused as a stable equilibrium satisfying Equation (4.7.10) (Amari, 1990). Here, a local excitation is a pattern in which the excitation is concentrated on units in a small local region; i.e., u*(r) is positive only in a small neighborhood centered at a maximally excited unit r0. Thus u*(r, x) represents a mapping from the input space onto the neural field.
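
A discretized one-dimensional version of these field dynamics can be simulated directly. In the sketch below, the Mexican-hat lateral kernel, the Gaussian input stimulus, and all parameter values are illustrative assumptions chosen only to exhibit a stable local excitation around the maximally stimulated unit:

import numpy as np

N, dx, tau, h = 100, 0.1, 1.0, -0.2            # grid size, spacing, time constant, bias field
r = np.arange(N) * dx                          # unit coordinates
diff = r[:, None] - r[None, :]
# on-center off-surround ("Mexican hat") lateral weight distribution
phi = 1.5 * np.exp(-diff ** 2 / (2 * 0.2 ** 2)) - 0.5 * np.exp(-diff ** 2 / (2 * 1.0 ** 2))

step = lambda u: (u > 0).astype(float)         # output nonlinearity f(u) = step(u)
stimulus = 0.3 * np.exp(-(r - r[N // 2]) ** 2 / (2 * 0.5 ** 2))  # input term, peaked at mid-field

u = np.zeros(N)
for _ in range(4000):                          # Euler relaxation toward the quasi-equilibrium
    s = phi @ step(u) * dx + stimulus          # total stimuli: lateral plus input
    u += 0.01 * (-u + s + h) / tau             # field potential dynamics in the form of Eq. (4.7.8)

active = np.where(u > 0)[0]
if active.size:
    print("active region spans units", active.min(), "to", active.max())
else:
    print("no active region (0-solution)")

Starting from u = 0, the potential relaxes to a bump of finite width centered on the peak of the input stimulus; making h more negative shrinks this bump, which is the mechanism exploited below to control the self-organizing process.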

Let us now look at the dynamics of the self-organizing process. We start by assuming a particular update
rule for the input weights of the neural field. One biologically plausible update rule is the Hebbian rule:

(4.7.11)
where y* = f(u*) denotes the neural field's equilibrium output activity due to input x. In Equation (4.7.11), we use the earlier assumption that the weight time constant τ' is much larger than τ. Next, we assume strong mixing in Equation (4.7.11), which allows us to write an expression for the average learning equation (after absorbing the time constant τ' into the remaining constants) as

(4.7.12)

where the averaging is over all possible x. The equilibria of Equation (4.7.12) are given by

(4.7.13)

If we now transpose Equation (4.7.12) and multiply it by an arbitrary input vector x we arrive at an equation
for the change in input stimuli

(4.7.14)

The vector inner product xT x' represents the similarity of two input signals x and x' and hence the topology
of the signal space (Takeuchi and Amari, 1979). Note how Equation (4.7.14) relates the topology of the
input stimulus set {x} with that of the neural field.

On the other hand, if one assumes a learning rule where a unit r updates its input weight vector in
proportion to the correlation of its equilibrium potential u*(r, x) and the difference x - w(r), one arrives at
the average differential equation

(4.7.15)

This learning rule is equivalent to the averaged continuum version of Kohonen's self-organizing feature map in Equation (4.7.2), if one views the potential distribution u*(r, x) as the weighting neighborhood function. Here, self-organization will emerge if the dynamics of the potential field evolve such that the
quasi-stable equilibrium potential u* starts positive for all r, then monotonically and slowly shrinks in
diameter for positive time. This may be accomplished through proper control of the bias field h, as
described below.

In general, it is difficult to solve Equations (4.7.8) and (4.7.12) [or (4.7.15)]. However, some properties of the formation of feature maps are revealed by these equations for the special, but instructive, case of a one-dimensional neural field (Takeuchi and Amari, 1979; Amari, 1980 and 1983). The dynamics of the potential field in Equation (4.7.8) for a one-dimensional neural field were analyzed in detail by Amari (1977b) and Kishimoto and Amari (1979) for a step activation function and a continuous monotonically nondecreasing activation function, respectively. It was shown that with the lateral weight distribution of Figure 4.7.1 and f(u) = step(u), there exist stable equilibrium solutions u*(r) for the x = 0 case. The 0-solution potential field, u*(r) = 0 everywhere, and the ∞-solution field, u*(r) > 0 everywhere, are among these stable solutions. The 0-solution is stable if and only if h < 0. On the other hand, the ∞-solution is stable if and only if h > −2A(∞), where A(∞) is the limiting value of A(a), with A(a) as defined below. Local excitations (also known as a-solutions), where u*(r) is positive only over a finite interval [a1, a2] of the neural field, are also possible. An a-solution exists if and only if h + A(a) = 0. Here, A(a) is the definite integral defined by

(4.7.16)
and plotted in Figure 4.7.3. Amari (see also Krekelberg and Kok, 1993) also showed that a single a-solution can exist for the case of a non-zero input stimulus, and that the corresponding active region of the neural field is centered at the unit r receiving the maximum input. Furthermore, the width a of this active region shrinks monotonically as the bias field h is made more negative. Thus, one may exploit the fact that the field potential/neighborhood function u*(r, x) is controlled by the bias field h in order to control the convergence of the self-organizing process. Here, the uniform bias field h is started at a positive value (h > −2A(∞)) and is slowly decreased towards negative values. This, in turn, causes u* to start at the ∞-solution, then gradually move through a-solutions with decreasing width a until, ultimately, u* becomes the 0-solution. For further analysis of the self-organizing process in a neural field, the reader is referred to Zhang (1991).

Figure 4.7.3. A plot of A(a) of Equation (4.7.16).

Kohonen (1993a and 1993b) proposed a self-organizing map model for which he gives physiological
justification. The model is similar to Amari's self-organizing neural field except that it uses a discrete two-
dimensional array of units. The model assumes sharp self-on off-surround lateral interconnections so that
the neural activity of the map is stabilized where the unit receiving the maximum excitation becomes active
and all other units are inactive. Kohonen's model employs unit potential dynamics similar to those of
Equation (4.7.8). On the other hand, Kohonen uses a learning equation more complex than those in
Equations (4.7.12) and (4.7.15). This equation is given for the ith unit weight vector by the pseudo-Hebbian
learning rule

(4.7.17)

where the gain factor is a positive constant. One term in Equation (4.7.17) models a natural "transient" neighborhood function: it represents a weighted sum of the output activities yl of nearby units, which describes the strength of the diffuse chemical effect of cell l on cell i; hil is a function of the distance between these units. This weighted sum of output activities replaces the output activity of the same cell, yi, in Hebb's learning rule. A second term acts as a stabilizing term which models "forgetting" effects, or disturbances due to adjacent synapses. Typically, forgetting effects in wi are proportional to the weight wi itself. In addition, if the disturbance caused by synaptic site r is mediated through the postsynaptic potential, the forgetting effect must further be proportional to wirxr. This phenomenon is modeled by the weighted sum appearing in the forgetting term. Here, the summation is taken over a subset of the synapses of unit i that are located near the jth synapse wij and that approximately act as one collectively interacting set. The major difference between the learning rules in Equations (4.7.17) and (4.7.12) is that, in the former, the "neighborhood" function is determined by a "transient" activity due to a diffusive chemical effect of nearby cell potentials, whereas it is determined by a stable region of the neural field potential in the latter.

Under the assumption that the index r ranges over all components of the input signal x, and regarding the transient neighborhood term as a positive scalar independent of w and x, the vector form of Equation (4.7.17) takes the Riccati differential equation form

(4.7.18)

where both coefficients are positive. Now, multiplying both sides of Equation (4.7.18) by 2wT leads to the differential equation

(4.7.19)

Thus, for arbitrary x with wTx > 0, the squared weight norm in Equation (4.7.19) converges to a fixed value given by the ratio of the two coefficients. On the other hand, the solution for the direction of w* cannot be determined in closed form from the deterministic differential Equation (4.7.18). However, a solution for the expected value of w may be found if Equation (4.7.18) is treated as a stochastic differential equation with strong mixing, in accordance with the discussion of Section 4.3. Taking the expected value of both sides of Equation (4.7.18) and solving for its equilibrium points (by setting the time derivative to zero) gives

(4.7.20)

Furthermore, this equilibrium point can be shown to be stable (Kohonen, 1989).

From the above analysis, it can be concluded that the synaptic weight vector w is automatically normalized to a fixed length, independent of the input signal x, and that w rotates such that its average direction is aligned with the mean of x. This is the expected result of a self-organizing map when a uniform nondecreasing neighborhood function is used. In general, though, the neighborhood term is nonuniform and time varying,
which makes the analysis of Equation (4.7.17) much more difficult.
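
A quick numerical check of this conclusion can be made by integrating a Riccati-type rule of the form dw/dt = a x − b (wTx) w, where a and b are placeholders for the two positive constants of Equation (4.7.18) (their names are not reproduced here), with random inputs whose mean is nonzero:

import numpy as np

rng = np.random.default_rng(2)
a, b, dt = 1.0, 4.0, 0.01            # placeholder positive constants and Euler step
mean_x = np.array([3.0, 1.0, 2.0])   # mean of the input distribution
w = 0.1 * rng.normal(size=3)

for _ in range(50000):
    x = mean_x + rng.normal(scale=0.5, size=3)   # random input (wTx > 0 for almost all steps)
    w += dt * (a * x - b * (w @ x) * w)          # Riccati-type update in the form of Eq. (4.7.18)

print("||w|| =", np.linalg.norm(w), " expected:", np.sqrt(a / b))
print("cosine(w, mean of x) =", w @ mean_x / (np.linalg.norm(w) * np.linalg.norm(mean_x)))

The printed norm settles near sqrt(a/b) regardless of the scale of x, while the cosine between w and the input mean approaches 1, in agreement with the analysis above.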

4.8 Generalization

In Chapter 2, we analyzed the capabilities of some neural network architectures for realizing arbitrary
mappings. We have found that a feedforward neural net with a single hidden layer having a sufficiently large number of sigmoidal activation units is capable of approximating any mapping (or continuous multivariate function) to within any desired degree of accuracy.
capabilities of layered neural networks say nothing about the synthesis/learning procedure needed to set the
interconnection weights of these networks. What remains to be seen is whether such networks are capable
of finding the necessary weight configuration for a given mapping by employing a suitable learning
algorithm.

Later chapters in this book address the question of learning in specific neural network architectures by
extending appropriate learning rules covered in Chapter 3. In the remainder of this chapter, we address two
important issues related to learning in neural networks: generalization and complexity. The following
discussion is general in nature, and thus it holds for a wide range of neural network paradigms.
Generalization is measured by the ability of a trained network to generate the correct output for a new input chosen randomly from the same probability density p(x) that governs the training set. In this section, two cases are considered: average generalization and worst-case generalization. This section also
considers the generalization capabilities of stochastic neural networks.

4.8.1 Generalization Capabilities of Deterministic Networks

One important performance measure of trainable neural networks is the size of the training set needed to
bound their generalization error below some specified number. Schwartz et al. (1990) gave a theoretical
framework for calculating the average probability of correct generalization for a neural net trained with a
training set of size m. Here, the averaging is over all possible networks (of fixed architecture) consistent with the training set, and the only assumptions about the network's architecture are that it is deterministic (employs deterministic units) and that it is a universal architecture (or faithful model) for the class of functions/mappings being learned. It is assumed that these functions are of the form f: Rn → {0, 1}, but the
ideas can be extended to multiple and/or continuous-valued outputs as well.

The following analysis is based on the theoretical framework of Schwartz et al. (1990), and it also draws on
some clarifications given by Hertz et al. (1991). The main result of this analysis is rather surprising; one can
calculate the average probability of correct generalization for any training set of size m if one knows a
certain function that can (in theory) be calculated before training begins. However, it should be kept in
mind that this result is only meaningful when interpreted in an average sense, and does not necessarily
represent the typical situation encountered in a specific training scheme.

Consider a class of networks with a certain fixed architecture specified by the number of layers, the number
of units within each layer, and the interconnectivity pattern between layers. Let us define the quantity V0 as

(4.8.1)

which stands for the total "volume" of the weight space, where w represents the weights of an arbitrary network and the integrand is some a priori weight probability density function. Thus, each network is represented as a point w in weight space which implements a function fw(x). We may now partition the weight space into a
set of disjoint regions, one for each function fw that this class of networks can implement. The volume of
the region of weight space that implements a particular function f(x) is given by

(4.8.2)

where

Each time an example {xk, fd(xk)} of a desired function fd is presented and is successfully learned
(supervised learning is assumed), the weight vector w is modified so that it enters the region of weight
space that is compatible with the presented example. If m examples are learned, then the volume of this
region is given by

(4.8.3)

where

The region Vm represents the total volume of weight space which realizes the desired function fd as well as
all other functions f that agree with fd on the desired training set. Thus, if a new input is presented to the
trained network it will be ambiguous with respect to a number of functions represented by Vm (recall the
discussion on ambiguity for the simple case of a single threshold gate in Section 1.5). As the number of
learned examples m is increased, the expected ambiguity decreases.
Next, the volume of weight space consistent with both the training examples and a "particular" function f is
given by

(4.8.4)

where Equation (4.8.2) was used. Note that fw in I has been replaced by f and that the product term factors
outside of the integral. Now, assuming independent input vectors xk generated randomly from distribution
p(x), the factors I(f, xk) in Equation (4.8.4) are independent. Thus, averaging Vm(f ) over all xk gives

(4.8.5)

The quantity g(f) takes on values between 0 and 1. It is referred to as the generalization ability of f; i.e., g(f) may be viewed as the probability that f(x) = fd(x) for an input x randomly chosen from p(x). As an example, for a completely specified n-input Boolean function fd, g(f) is given by (assuming that all 2n inputs are equally
likely)

where dH(f, fd) is the number of bits by which f and fd differ.

Let us define the probability Pm(f) that a particular function f can be implemented after training on m examples of fd. This probability is equal to the average fraction of the remaining weight space that f occupies:

(4.8.6)

The approximation in Equation (4.8.6) is based on the assumption that Vm does not vary much with the particular training sequence; i.e., Vm is approximately equal to its average over all probable sequences. This assumption is expected to be valid as long as m is small compared to the total number of possible input combinations.

Good generalization requires that Pm(f) be small for functions f with low generalization ability. Let us use Equation (4.8.6) to compute the distribution of generalization ability g(f) across all possible f's after successful training with m examples:

(4.8.7)

Note that an exact, normalized distribution ρm(g) can be derived by dividing the right-hand side of Equation (4.8.7) by its integral over g. The above result is interesting since it allows us to compute, before learning, the distribution of generalization ability after training with m examples. The form of Equation (4.8.7) shows that the distribution ρm(g) tends to become concentrated at higher and higher values of g as more and more examples are learned. Thus, during learning, although the allowed volume of weight (or function) space shrinks, the remaining regions tend to have large generalization ability.

Another useful measure of generalization is the average generalization ability G(m) given by

(4.8.8)

which is the ratio between the (m + 1)st and the mth moments of ρ0(g) and can be computed if ρ0(g) is given or estimated. G(m) gives the entire "learning curve"; i.e., it gives the average expected success rate as a function of m. Equation (4.8.8) allows us to predict the number of examples, m, necessary to train the network to a desired average generalization performance. We may also define the average prediction error as 1 − G(m). The asymptotic behavior (m → ∞) of the average prediction error is determined by the form of the initial distribution ρ0(g) near g = 1. If a finite gap exists between g = 1 and the next highest g for which ρ0(g) is nonzero, then the prediction error decays to zero exponentially fast in m. If, on the other hand, there is no such gap in ρ0(g), then the prediction error decays only as 1/m. These two behaviors of the learning curve have also been verified through numerical experiments. The nature of the gap in the distribution of generalization abilities near the region of perfect generalization (g = 1) is not completely understood. These gaps have been detected in experiments involving the learning of binary mappings (Cohn and Tesauro, 1991 and 1992). It is speculated that such a gap could be due to the dynamic effects of the learning process, where the learning algorithm may, for some reason, avoid the observed near-perfect solutions. Another possibility is that the gap is inherent in the nature of the binary mappings themselves.
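
Equations (4.8.7) and (4.8.8) are easy to evaluate numerically once an initial density of generalization abilities is assumed. In the sketch below, the particular shape chosen for ρ0(g) is purely illustrative; the point is only to show how the posterior density proportional to g to the power m times ρ0(g) concentrates near g = 1, and how G(m) is obtained as a ratio of moments:

import numpy as np

g = np.linspace(0.0, 1.0, 2001)
rho0 = g ** 2 * np.sqrt(1.0 - g)        # assumed (purely illustrative) initial density rho_0(g)
rho0 /= rho0.sum()                      # normalize on the grid

def G(m):
    """Average generalization ability after m examples, Eq. (4.8.8):
    the ratio of the (m + 1)st to the mth moment of rho_0(g)."""
    return float((g ** (m + 1) * rho0).sum() / (g ** m * rho0).sum())

for m in (0, 10, 100, 1000, 10000):
    print(m, round(G(m), 4), round(1.0 - G(m), 6))   # G(m) -> 1 as m grows

Because the assumed ρ0(g) has no gap below g = 1, the prediction error 1 − G(m) in this example decays roughly as 1/m, as stated above.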

The above approach, though theoretically interesting, is of little practical use for estimating m, since it requires knowledge of the distribution ρ0(g), whose estimation is computationally expensive. It also gives
results that are only valid in an average sense, and does not necessarily represent the typical situation
encountered in a specific training scheme.

Next, we summarize a result that tells us about the generalization ability of a deterministic feedforward neural network in the "worst" case. Here, also, the case of learning a binary-valued output function f: Rn → {0, 1} is treated. Consider a set of m labeled training example pairs (x, y) selected randomly from some arbitrary probability distribution p(x, y), with x ∈ Rn and y = f(x) ∈ {0, 1}. Also, consider a single hidden layer feedforward neural net with k LTG's and d weights that has been trained on the m examples so that at least a fraction 1 − ε/2 of the examples are correctly classified, where ε is a small positive error tolerance. Then, with a probability approaching 1, this network will correctly classify a fraction 1 − ε of future random test examples drawn from p(x, y), as long as (Baum and Haussler, 1989)

(4.8.9)

Ignoring the log term, we may write Equation (4.8.9), to a first-order approximation, as

(4.8.10)

which requires m >> d for good generalization. It is interesting to note that this is the same condition for "good" generalization (low ambiguity) for a single LTG derived by Cover (1965) (refer to Section 1.5) and obtained empirically by Widrow (1987). Therefore, one may note that in the limit of large m the architecture of the network is not important in determining the worst-case generalization behavior; what matters is the ratio of the number of degrees of freedom (weights) to the training set size. On the other hand, none of the above theories may hold for the case of a small training set. In this latter case, the size and architecture of the network and the learning scheme all play a role in determining generalization quality (see the next chapter for more details). It should also be noted that the architecture of the net can play an important role in determining the speed of convergence of a given class of learning methods, as discussed later in Section 4.9.
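
As a rough order-of-magnitude calculation, one may tabulate the training set sizes suggested by the bound and by its first-order simplification. The explicit constants used below follow the commonly quoted form of the Baum-Haussler result, m of the order of (32d/ε) ln(32k/ε); the authoritative statement is the omitted Equation (4.8.9), so the numbers should be read only as illustrative:

import numpy as np

def baum_haussler_m(d, k, eps):
    """Training set size suggested by the commonly quoted Baum-Haussler form
    m >= (32 d / eps) * ln(32 k / eps).  The constants here are an assumption;
    the text's Equation (4.8.9) is the authoritative version."""
    return 32.0 * d / eps * np.log(32.0 * k / eps)

d, k = 500, 20                       # weights and LTG units (illustrative network size)
for eps in (0.1, 0.05, 0.01):
    print(eps, int(baum_haussler_m(d, k, eps)), int(d / eps))   # full bound vs. m ~ d/eps

Even for moderate error tolerances, the suggested m is many times the number of weights d, consistent with the m >> d condition above.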

Similar results for worst case generalization are reported in Blumer et al. (1989). A more general learning
curve based on statistical physics and VC dimension theories (Vapnik and Chervonenkis, 1971) which
applies to a general class of networks can be found in Haussler et al. (1992). For generalization results with
noisy target signals the reader is referred to Amari et al. (1992).

4.8.2 Generalization in Stochastic Networks

This section deals with the asymptotic learning behavior of a general stochastic learning dichotomy
machine (classifier). We desire a relation between the generalization error and the training error in terms of
the number of free parameters of the machine (machine complexity) and the size of the training set. The
results in this section are based on the work of Amari and Murata (1993). Consider a parametric family of stochastic machines where a machine is specified by a d-dimensional parameter vector w such that the probability of output y, given an input x, is specified by P(y | x, w). As an example, one may assume the machine to be a stochastic multilayer neural network parameterized by a weight vector w ∈ Rd which, for a given input x ∈ Rn, emits a binary output y ∈ {0, 1} with probability

(4.8.11)

and

where
(4.8.12)

and g(x, w) may be considered as a smooth deterministic function (e.g., superposition of multivariate
sigmoid functions typically employed in layered neural nets). Thus, in this example, the stochastic nature of
the machine is determined by its stochastic output unit.
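
A minimal sketch of such a stochastic machine, with the logistic squashing function as an assumed form for Equations (4.8.11)-(4.8.12) and a simple linear g(x, w) = wTx used only for brevity, together with the entropic loss used below, might look as follows:

import numpy as np

def p_one(x, w):
    """P(y = 1 | x, w) for a stochastic output unit; the logistic form and the
    linear g(x, w) = w^T x are illustrative assumptions."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def entropic_loss(y, x, w):
    """Entropic (negative log-likelihood) loss, -log P(y | x, w)."""
    p = p_one(x, w)
    return -np.log(p if y == 1 else 1.0 - p)

rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])        # parameters of a hypothetical "true machine"
x = rng.normal(size=3)
y = int(rng.random() < p_one(x, w_true))   # the true machine emits y stochastically
print(y, entropic_loss(y, x, w_true))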

Assume that there exists a true machine that can be faithfully represented by one of the above family of
stochastic machines with parameter w0. The true machine receives inputs xk, k = 1, 2, ..., m, which are
randomly generated according to a fixed but unknown probability distribution p(x), and emits yk. The machine specified by the maximum likelihood estimate of w (refer to Section 3.1.5 for a definition), computed from these m examples, will be our first candidate machine. This machine predicts y for a given x with probability P(y | x, w) evaluated at the estimated parameters. An entropic

Let εgen be the average predictive entropy (also known as the average entropic loss) of the trained machine, evaluated on a new example (xm+1, ym+1):

(4.8.13)

Similarly, we define εtrain as the average entropic loss over the m training examples used to obtain the estimate:

(4.8.14)

Finally, let H0 be the average entropic error of the true machine

(4.8.15)

Amari and Murata proved the following theorem for training and generalization error.

Theorem 4.8.1 (Amari and Murata, 1993): The asymptotic learning curve for the entropic training error is
given by

(4.8.16)

and for the entropic generalization error by

(4.8.17)

The proof of Theorem 4.8.1 uses standard techniques of asymptotic statistics and is omitted here. (The reader is referred to the original paper by Amari and Murata (1993) for the proof.)

In general, H0 is unknown, and it can be eliminated from Equation (4.8.17) by substituting its value from Equation (4.8.16). This gives

(4.8.18)

which shows that, for a faithful stochastic machine and in the limit m >> d, the generalization error approaches the training error of the machine trained on m examples, which, from Equation (4.8.16), approaches the classification error H0 of the true machine.

Again, the particular network architecture is of no importance here as long as it allows for a faithful realization of the true machine and m >> d. It is interesting to note that this result is similar to the worst-case learning curve for deterministic machines [Equation (4.8.10)] when the training error is zero. The result is also in agreement with Cover's result on classifier ambiguity in Equation (1.5.3), where the d/m term in Equation (4.8.18) may be viewed as the probability of an ambiguous response on the (m + 1)st input. In fact, Amari (1993) also proved that the average predictive entropy εgen(m) for a general deterministic dichotomy machine (e.g., a feedforward neural net classifier) converges to 0 as d/m in the limit m → ∞.
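
Assuming the usual statement of the Amari-Murata learning curves, with training error approximately H0 − d/(2m) and generalization error approximately H0 + d/(2m), so that the two differ by d/m (the precise expressions are the omitted Equations (4.8.16)-(4.8.18)), the asymptotic behavior can be tabulated as follows:

import numpy as np

H0, d = 0.25, 100                        # illustrative true-machine entropy and parameter count
m = np.array([500, 1000, 5000, 20000, 100000])

eps_train = H0 - d / (2.0 * m)           # assumed asymptotic form of Eq. (4.8.16)
eps_gen = H0 + d / (2.0 * m)             # assumed asymptotic form of Eq. (4.8.17)

for mi, tr, ge in zip(m, eps_train, eps_gen):
    print(mi, round(tr, 4), round(ge, 4), round(ge - tr, 4))    # gap shrinks as d/m

The gap between the two curves shrinks as d/m, so the architecture enters only through its parameter count d, in agreement with the remarks above.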
4.9 Complexity of Learning

This section deals with the computational complexity of learning: How much computation is required to
learn (exactly or approximately to some "acceptable" degree) an arbitrary mapping in a multilayer neural
network? In other words, is there an algorithm that is computationally "efficient" for training layered neural
networks? Here, we will assume that the desired learning algorithm is a supervised one, which implies that
the training set is labeled. Also, we will assume that the neural network has an arbitrary architecture but
with no feedback connections.

Learning in artificial neural networks is hard. More precisely, loading an arbitrary mapping onto a "faithful" neural network architecture requires exponential time, irrespective of the learning algorithm used (batch or adaptive). Judd (1987, 1990) showed that the learning problem in neural networks is NP-complete, even for approximate learning; i.e., in the worst case, we will not be able to do much better than randomly exhausting all combinations of weight settings to see if one happens to work. Therefore, as the problem size increases (that is, as the input pattern dimension or the number of input patterns increases), the training time scales up exponentially in the size of the problem. Moreover, it has been shown (Blum and Rivest, 1989) that training a simple n-input, three-unit, two-layer net of LTG's can be NP-complete in the worst case when learning a given set of examples. Consider the class of functions defined on a collection of m arbitrary points in Rn. It has been shown that the problem of deciding whether there exist two hyperplanes that separate them is NP-complete (Megiddo, 1986); i.e., the training of a net with two hidden n-input LTG's and a single output LTG on examples of such functions is exponential in time, in the worst case, even if a solution exists. Blum and Rivest (1992) extended this result to the case of Boolean functions. They also showed that learning Boolean functions with a two-layer feedforward network of k hidden units (k bounded by some polynomial in n) and one output unit (which computes the AND function) is NP-complete.

However, these theoretical results do not rule out the possibility of finding polynomial-time algorithms for loading certain classes of problems onto carefully selected architectures. Blum and Rivest (1992) gave an example of two networks trained on the same task, such that training the first is NP-complete but the second can be trained in polynomial time. Also, the class of linearly separable mappings can be trained in polynomial time if single-layer LTG nets are employed (only a single unit is needed if the mapping has a single output). This is easy to prove, since one can use linear programming (Karmarkar, 1984) to compute the weights and thresholds of such nets in polynomial time. One can also use this fact to construct layered networks that have polynomial learning-time complexity for certain classes of non-linearly separable mappings. This is illustrated next.

Consider a set F of non-linearly separable functions which has the following two properties: (1) There exists at least one layered neural net architecture for which loading m training pairs {x, yd} of f(x) is NP-complete; (2) there exists a fixed dimensionality expansion process D that maps points x in Rn to points z in Rd, such that d is bounded by some polynomial in n [e.g., d = O(n2)], and such that the m training examples {z, yd} representing f(x) in the expanded space Rd are linearly separable. This set F is not empty; Blum and Rivest (1992) gave examples of functions in F. Figure 4.9.1 depicts a layered architecture which can realize any function in F. Here, a fixed preprocessing layer, labeled D in Figure 4.9.1, implements the above dimensionality expansion process. The output node is a d-input LTG. It can easily be shown that the learning complexity of this network for functions in F is polynomial. This can be seen by noting that the training of the trainable part of this network (the output LTG) has polynomial complexity for m linearly separable examples in Rd, and that as n increases, d remains polynomial in n.

Figure 4.9.1. A layered architecture consisting of a fixed preprocessing layer D followed by an adaptive
LTG.
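
The idea behind Figure 4.9.1 can be illustrated on a tiny non-linearly separable task. In the sketch below, a fixed quadratic expansion plays the role of the preprocessing layer D, and the single output LTG is then trained with the ordinary perceptron rule; the book's polynomial-time guarantee rests on linear programming rather than the perceptron rule, which is used here only because the toy problem is so small:

import numpy as np

def D(x):
    """Fixed dimensionality expansion R^2 -> R^d (bias, linear, and quadratic terms)."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# XOR-type task: not linearly separable in R^2, but separable after expansion.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])                    # desired LTG outputs

Z = np.array([D(x) for x in X])                 # expanded training examples
w = np.zeros(Z.shape[1])
for _ in range(100):                            # perceptron training of the output LTG
    for zk, yk in zip(Z, y):
        if yk * (w @ zk) <= 0:
            w += yk * zk

print([int(np.sign(w @ zk)) for zk in Z])       # should reproduce the targets

The XOR-type targets, which no single LTG can realize on the raw inputs, are reproduced exactly once the examples are mapped into the (linearly separable) expanded space Rd.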

The efficiency of learning linearly separable classification tasks in a single threshold gate should not be
surprising. We may recall from Chapter One that the average amount of necessary and sufficient
information for the characterization of the set of separating surfaces for a random, separable dichotomy of
m points grows slowly with m and asymptotically approaches 2d (twice the number of degrees of freedom
of the class of separating surfaces). This implies that for a random set of linear inequalities in d unknowns,
the expected number of extreme inequalities which are necessary and sufficient to cover the whole set,
tends to 2d as the number of consistent inequalities tends to infinity, thus bounding the (expected) necessary
number of training examples for learning algorithms in separable problems. Moreover, this limit of 2d
consistent inequalities is within the learning capacity of a single d-input LTG.

Another intuitive reason that the network in Figure 4.9.1 is easier to train than a fully adaptive two layer
feedforward net is that we are giving it predefined nonlinearities. The former net does not have to start from
scratch, but instead is given more powerful building blocks to work with. However, there is a trade off. By
using the network in Figure 4.9.1, we gain in a worst-case computational sense, but lose in that the number
of weights increases from n to O(n2) or higher. This increase in the number of weights implies that the
number of training examples must increase so that the network can meaningfully generalize on new
examples (recall the results of the previous section).

The problem of NP-complete learning in multilayer neural networks may be attributed to the use of fixed network resources (Baum, 1989). Learning an arbitrary mapping can be achieved in polynomial time by a network that allocates new computational units as more patterns are learned. Mukhopadhyay et al. (1993) gave a polynomial-time training algorithm for the general class of classification problems (defined by mappings from Rn onto a discrete set of class labels) based on clustering and linear programming models. This algorithm simultaneously
designs and trains an appropriate network for a given classification task. The basic idea of this method is to
cover class regions with a minimal number of dynamically allocated hyperquadratic volumes (e.g.,
hyperspheres) of varying size. The resulting network has a layered structure consisting of a simple fixed
preprocessing layer, a hidden layer of LTG's, and an output layer of logical OR gates. This and other
efficiently trainable nets are considered in detail in Section 6.3.

4.10 Summary

Learning in artificial neural networks is viewed as a search for parameters (weights) which optimize a
predefined criterion function. A general learning equation is presented which implements a stochastic
steepest gradient descent search on a general criterion function with (or without) a regularization term. This
learning equation serves to unify a wide variety of learning rules, regardless of whether they are supervised,
unsupervised, or reinforcement rules.

The learning equation is a first order stochastic differential equation. This allows us to employ an averaging
technique to study its equilibria and its convergence characteristics. The use of averaging, under reasonable
assumptions, allows us to approximate the stochastic learning equation by a deterministic first-order
dynamical system. In most cases, a well-defined criterion function exists which allows us to treat the
deterministic systems as a gradient system. This enables us to exploit the global stability property of
gradient systems, and determine the nature of the solutions evolved by the average learning equation. These
stable solutions are then taken to be the possible solutions sought by the associated stochastic learning
equation. The averaging technique is employed in characterizing several basic rules for supervised,
unsupervised, and reinforcement learning. For unsupervised learning, we present analysis and insights into
the theory of Hebbian, competitive, and self-organizing learning. In particular, self-organizing neural fields
are introduced and analyzed.

The chapter also looks at some important results on generalization of learning in general feedforward neural
architectures. The asymptotic behavior of generalization error is derived for deterministic and stochastic
networks. Generalization in the average case and in the worst case is considered. The main result here is that
the number of training examples necessary for "good" generalization on test samples must far exceed the
number of adjustable parameters of the network used.
Finally, the issue of complexity of learning in neural networks is addressed. It is found that learning an
arbitrary mapping in a layered neural network is NP-complete in the worst case. However, it is also found
that efficient (polynomial-time) learning is possible if appropriate network architectures and corresponding
learning algorithms are found for certain classes of mappings/learning tasks.

Problems

4.1.1 Identify the "regularization" term, if any, in the learning rules listed in Table 3.1 of Chapter 3.

4.2.1 Characterize the LMS learning rule with weight decay by analyzing its corresponding average
differential equation as in Section 4.2. Find the underlying instantaneous criterion J and its expected value.

4.2.2 Employ Liapunov's first method (see footnote #4) to study the stability of the nonlinear system

for the equilibrium point x* = [0 1]T. Is this an asymptotically stable point? Why?

4.2.3 Study the stability of the lossless pendulum system with the nonlinear dynamics

about the equilibrium points x* = [0 0]T and x* = [π 0]T. Here, the first state variable measures the angle of the pendulum with respect to its vertical rest position, g is gravitational acceleration, and L is the length of the pendulum.

4.2.4 Liapunov's first method for studying the stability of nonlinear dynamical systems (see footnote #4) is
equivalent to studying the asymptotic stability of a linearized version of these systems about an equilibrium
point. Linearize the second order nonlinear system in Problem 4.2.2 about the equilibrium point x* = [0 1]T
and write the system equations in the form . Show that the system matrix A is identical to the Jacobian
matrix f '(x) at x = [0 1]T of the original nonlinear system and thus both matrices have the same
eigenvalues. (Note that the asymptotic stability of a linear system requires the eigenvalues of its system
matrix A to have strictly negative real parts).

4.2.5 The linearization method for studying the stability of a nonlinear system at a given equilibrium point
may fail when the linearized system is stable, but not asymptotically stable; that is, if all eigenvalues of the system matrix A have nonpositive real parts and one or more eigenvalues have zero real part.
Demonstrate this fact for the nonlinear system

at the equilibrium point x* = [0, 0]T. (Hint: Simulate the above dynamical system for the three cases: a < 0,
a = 0, and a > 0, for an initial condition x(0) of your choice).

* 4.3.1 Show that the Hessian of J in Equation (4.3.6) is given by

Also, show that

* 4.3.2 Study the stability of the equilibrium point(s) of the dynamical system , where J(w) is given by
Note that the above system corresponds to the average learning equation of Linsker's learning rule (see
Section 3.3.4) without weight clipping.

4.3.3 Verify the Hessian matrices given in Section 4.3 for Oja's, Yuille et al., and Hassoun's learning rules,
given in Equations (4.3.14), (4.3.21), and (4.3.30), respectively.

4.3.4 Show that the average learning equation for Hassoun's rule is given by Equation (4.3.26).

4.3.5 Study the stability of the equilibrium points of the stochastic differential equation/learning rule
(Riedel and Schild, 1992)

where the exponent is a positive integer. (Note: this rule is equivalent to the Yuille et al. rule when the exponent equals 2.)

† 4.3.6 Study, via numerical simulations, the stability of the learning rule (Riedel and Schild, 1992)

where . Assume a training set {x} of twenty vectors in R10 whose components are generated randomly and
independently according to the normal distribution N(0, 1). Is there a relation between the stable point(s) w*
(if such point(s) exist) and the eigenvectors of the input data autocorrelation matrix? Is this learning rule
local? Why?

4.3.7 Show that the discrete-time version of Oja's rule is a good approximation of the normalized Hebbian
rule in Equation (4.3.33) for small values. Hint: Start by showing that

* 4.3.8 Consider the general learning rule described by the following discrete-time gradient system:

(1)

with > 0. Assume that w* is an equilibrium point for this dynamical system.

a. Show that in the neighborhood of w*, the gradient J(w) can be approximated as

(2)

where H(w*) is the Hessian of J evaluated at w*.

b. Show that the gradient in Equation (2) is exact when J(w) is quadratic; i.e., where Q is a symmetric
matrix, and b is a vector of constants.

c. Show that the linearized gradient system at w* is given by

(3)

where I is the identity matrix.

d. What are the conditions on H(w*) and for local asymptotic stability of w* in Equation (3)?

e. Use the above results to show that, in an average sense, the µ-LMS rule in Equation (3.1.35) converges asymptotically to the equilibrium solution in Equation (3.1.45) if 0 < µ < 2/λmax, where λmax is the largest eigenvalue of the autocorrelation matrix C in Equation (3.3.4). (Hint: Start with the gradient system in Equation (1) and use Equation (3.1.44) for J.) Now, show that 0 < µ < 2/tr(C) is a sufficient condition for convergence of the µ-LMS rule. (Hint: The trace of a matrix (the sum of all diagonal elements) is equal to the sum of its eigenvalues.)

*4.3.9 Use the results from the previous problem and Equation (4.3.14) to find the range of learning rate values for which the discrete-time Oja's rule is stable (in an average sense). Repeat for Hassoun's rule, which has the Hessian matrix given by Equation (4.3.30), and give a justification for the choice of the value 1 which led to Equation (4.3.32).

† 4.3.10 Consider a training set of 40 15-dimensional vectors whose components are independently
generated according to a normal distribution N(0, 1). Employ the stochastic discrete-time version of Oja's,
Yuille et al., and Hassoun's rules (replace by wk+1 - wk in Equations (4.3.11), (4.3.17), and (4.3.24),
respectively) to extract the principal eigenvector of this training set. Use a fixed presentation order of the
training vectors. Compare the convergence behavior of the three learning rules by generating plots similar
to those in Figures 4.3.1 (a) and (b). Use = 0.005, = 100, and a random initial w. Repeat using the
corresponding discrete-time average learning equations with the same learning parameters and initial
weight vector as before and compare the two sets of simulations.

4.3.11 This problem illustrates an alternative approach to the one of Section 4.3 for proving the stability of
equilibrium points of an average learning equation (Kohonen, 1989). Consider a stochastic first order
differential equation of the form where x(t) is governed by a stationary stochastic process. Furthermore,
assume that the vectors x are statistically independent from each other and that strong mixing exists. Let z
be an arbitrary constant vector having the same dimension as x and w. Now, the "averaged" trajectories of
w(t) are obtained by taking the expected value of ,

a. Show that

where is the angle between vectors z and w.

b. Let z = c(i), the ith unity-norm eigenvector of the autocorrelation matrix C = <xxT>. Show that the average rate of change of the cosine of the angle between w and c(i) for Oja's rule [Equation (4.3.12)] is given by

where λi is the eigenvalue associated with c(i). Note that the c(i)'s are the equilibria of Oja's rule.

c. Use the result in part b to show that if w(0) is not orthogonal to c(1), then w(t) will converge to the solution w* = c(1), where c(1) is the eigenvector with the largest eigenvalue λ1. (Hint: Recall the bounds on the Rayleigh quotient given in Section 4.3.2.)

*4.3.12 Use the technique outlined in Problem 4.3.11 to study the convergence properties (in the average)
of the following stochastic learning rules which employ a generalized forgetting law:

a. .

b. .

Assume that > 0 and y = wTx, and that g(y) is an arbitrary scalar function of y such that exists. Note that
Equation (4.7.18) and Oja's rule are special cases of the learning rules in a and b, respectively.

4.4.1 Show that Equation (4.4.7) is the Hessian for the criterion function implied by Equation (4.4.5).
4.6.1 Study (qualitatively) the competitive learning behavior which minimizes the criterion function

where is as defined in Equation (4.6.3). Can you think of a physical system (for some integer value of N)
which is governed by this "energy" function J?

4.6.2 Derive a stochastic competitive learning rule whose corresponding average learning equation
maximizes the criterion function in Equation (4.6.16).

* 4.7.1 Show that for the one-dimensional feature map, Equation (4.7.6) can be derived from Equation
(4.7.4). See Hertz et al. (1991) for hints.

4.7.2 Show that Equation (4.7.5) satisfies Equation (4.7.6).

4.7.3 Solve Equation (4.7.5) for p(x) x, where R. For which input distribution p(x) do we have a zero-
distortion feature map?

4.7.4 Prove the stability of the equilibrium point in Equation (4.7.20). (Hint: Employ the technique outlined
in Problem 4.3.11).
5. Adaptive Multilayer Neural Networks I
5.0 Introduction

This chapter extends the gradient descent-based delta rule of Chapter 3 to multilayer feedforward neural
networks. The resulting learning rule is commonly known as error back propagation (or backprop), and it is
one of the most frequently used learning rules in many applications of artificial neural networks.

The backprop learning rule is central to much current work on learning in artificial neural networks. In fact,
the development of backprop is one of the main reasons for the renewed interest in artificial neural
networks. Backprop provides a computationally efficient method for changing the weights in a feedforward
network, with differentiable activation function units, to learn a training set of input-output examples.
Backprop-trained multilayer neural nets have been applied successfully to solve some difficult and diverse
problems such as pattern classification, function approximation, nonlinear system modeling, time-series
prediction, and image compression and reconstruction. For these reasons, we devote most of this chapter to
study backprop, its variations, and its extensions.

Backpropagation is a gradient descent search algorithm which may suffer from slow convergence to local
minima. In this chapter, several methods for improving backprop's convergence speed and avoidance of
local minima are presented. Whenever possible, theoretical justification is given for these methods. A
version of backprop based on an enhanced criterion function with global search capability is described,
which, when properly tuned, allows for relatively fast convergence to good solutions. Several significant
applications of backprop trained multilayer neural networks are described. These applications include the
conversion of English text into speech, mapping hand gestures to speech, recognition of hand-written zip
codes, autonomous vehicle navigation, medical diagnosis, and image compression.

The last part of this chapter deals with extensions of backprop to more general neural network architectures.
These include multilayer feedforward nets whose inputs are generated by a tapped delay-line circuit and
fully recurrent neural networks. These adaptive networks are capable of extending the applicability of
artificial neural networks to nonlinear dynamical system modeling and temporal pattern association.

5.1 Learning Rule for Multilayer Feedforward Neural Networks

Consider the two-layer feedforward architecture shown in Figure 5.1.1. This network receives a set of scalar signals {x0, x1, ..., xn}, where x0 is a bias signal equal to 1. This set of signals constitutes an input vector x, x ∈ Rn+1. The layer receiving the input signal is called the hidden layer. Figure 5.1.1 shows a hidden layer
having J units. The output of the hidden layer is a (J+1)-dimensional real-valued vector z, z = [z0, z1, ...,
zJ]T. Again, z0 = 1 represents a bias input and can be thought of as being generated by a "dummy" unit (with
index zero) whose output z0 is clamped at 1. The vector z supplies the input for the output layer of L units.
The output layer generates an L-dimensional vector y in response to the input x which, when the network is
fully trained, should be identical (or very close) to a "desired" output vector d associated with x.
Figure 5.1.1. A two layer fully interconnected feedforward neural network architecture. For clarity, only
selected connections are drawn.

The activation function fh of the hidden units is assumed to be a differentiable nonlinear function (typically, fh is the logistic function defined by fh(net) = 1/[1 + e−βnet], or a hyperbolic tangent function fh(net) = tanh(βnet), with values of the gain β close to unity). Each unit of the output layer is assumed to have the
same activation function, denoted fo; the functional form of fo is determined by the desired output
signal/pattern representation or the type of application. For example, if the desired output is real-valued (as
in some function approximation applications), then a linear activation fo (net) = net may be used. On the
other hand, if the network implements a pattern classifier with binary outputs, then a saturating nonlinearity
similar to fh may be used for fo. In this case, the components of the desired output vector d must be chosen
within the range of fo. It is important to note that if fh is linear, then one can always collapse the net in
Figure 5.1.1 to a single layer net and thus lose the universal approximation/mapping capabilities discussed
in Chapter 2. Finally, we denote by wji the weight of the jth hidden unit associated with the input signal xi.
Similarly, wlj is the weight of the lth output unit associated with the hidden signal zj.

Next, consider a set of m input/output pairs {xk, dk}, where dk is an L-dimensional vector representing the
desired network output upon the presentation of xk. The objective here is to adaptively adjust the
J(n + 1) + L(J + 1) weights of this network such that the underlying function/mapping represented
by the training set is approximated or learned. Since the learning here is supervised, i.e., target outputs are
available, we may define an error function to measure the degree of approximation for any given setting of
the network's weights. A commonly used error function is the SSE measure, but this is by no means the
only possibility, and later in this chapter, several other error functions will be discussed. Once a suitable
error function is formulated, learning can be viewed (as was done in Chapters 3 and 4) as an optimization
process. That is, the error function serves as a criterion function, and the learning algorithm seeks to
minimize the criterion function over the space of possible weight settings. For instance, if a differentiable
criterion function is used, gradient descent on such a function will naturally lead to a learning rule. This
idea has been invented independently by Bryson and Ho (1969), Amari (1967; 1968), Werbos (1974), and
Parker (1985). Next, we illustrate the above idea by deriving a supervised learning rule for adjusting the
weights wji and wlj such that the following error function is minimized (in a local sense) over the training
set (Rumelhart et al., 1986b):

(5.1.1)

Here, w represents the set of all weights in the network. Note that Equation (5.1.1) is the "instantaneous"
SSE criterion of Equation (3.1.32) generalized for a multiple output network.
5.1.1 Error Backpropagation Learning Rule

Since the targets for the output units are explicitly specified, one can directly use the delta rule, derived in
Section 3.1.3 for updating the wlj weights. That is,

(5.1.2)

where netl is the weighted sum for the lth output unit, fo'(netl) is the derivative of fo with respect to net, and the two weight symbols denote the updated (new) and current weight values, respectively. The zj's are
computed by propagating the input vector x through the hidden layer according to:

(5.1.3)

The learning rule for the hidden layer weights wji is not as obvious as that for the output layer since we do
not have available a set of target values (desired outputs) for hidden units. However, one may derive a
learning rule for hidden units by attempting to minimize the output layer error. This amounts to propagating
the output errors (dl − yl) back through the output layer towards the hidden units in an attempt to estimate
"dynamic" targets for these units. Such a learning rule is termed error back-propagation or the backprop
learning rule and may be viewed as an extension of the delta rule (Equation 5.1.2) used for updating the
output layer. To complete the derivation of backprop for the hidden layer weights, and similar to the above
derivation for the output layer weights, we perform gradient descent on the criterion function in Equation
(5.1.1), but this time, the gradient is calculated with respect to the hidden weights:

(5.1.4)

where the partial derivative is to be evaluated at the current weight values. Using the chain rule for
differentiation, one may express the partial derivative in Equation (5.1.4) as

(5.1.5)

with

(5.1.6)
(5.1.7)

and

(5.1.8)

Now, upon substituting Equations (5.1.6) through (5.1.8) into Equation (5.1.5) and using Equation (5.1.4),
we arrive at the desired learning rule:

(5.1.9)

By comparing Equation (5.1.9) to (5.1.2), one can immediately define an "estimated target" dj for the jth
hidden unit implicitly in terms of the back propagated error signal dj−zj as follows:

(5.1.10)

It is usually possible to express the derivatives of the activation functions in Equations (5.1.2) and (5.1.9) in
terms of the activations themselves. For example, for the logistic activation function, we have

(5.1.11)

and for the hyperbolic tangent function, we have

(5.1.12)

The above learning equations may also be extended to feedforward nets with more than one hidden layer
and/or nets with connections that jump over one or more layers (see Problems 5.1.2 and 5.1.3). The
complete procedure for updating the weights in a feedforward neural net utilizing the above rules is
summarized below for the two layer architecture of Figure 5.1.1. We will refer to this learning procedure as
incremental backprop or just backprop:

1. Initialize all weights and refer to them as the "current" weights (see Section 5.2.1 for details).

2. Set the output layer and hidden layer learning rates to small positive values (refer to Section 5.2.2 for additional details).
3. Select an input pattern xk from the training set (preferably at random) and propagate it through the
network, thus generating hidden and output unit activities based on the current weight settings.

4. Use the desired target dk associated with xk and employ Equation (5.1.2) to compute the output layer weight changes.

5. Employ Equation (5.1.9) to compute the hidden layer weight changes. Normally, the current weights are used in these computations. In general, enhanced error correction may be achieved if one employs the updated output layer weights (i.e., the current weights plus the changes from step 4). However, this comes at the added cost of recomputing yl and fo'(netl).

6. Update all weights by adding the computed weight changes to the current weights, for the output and hidden layers, respectively.

7. Test for convergence. This is done by checking some preselected function of the output errors to see if its

magnitude is below some preset threshold. If convergence is met, stop; otherwise, take the updated weights as the current weights and go to step 3. It should be noted that backprop may fail to find a solution which passes the
convergence test. In this case, one may try to reinitialize the search process, tune the learning parameters,
and/or use more hidden units.
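
The procedure above can be condensed into a short program. The sketch below uses tanh hidden units, a linear output unit, and a toy one-dimensional function approximation task; these choices, along with the learning rates and network size, are illustrative and are not taken from the text:

import numpy as np

rng = np.random.default_rng(4)
J, L_out, n = 8, 1, 1                          # hidden units, output units, inputs
W_h = rng.uniform(-0.5, 0.5, (J, n + 1))       # hidden weights (first column is the bias weight)
W_o = rng.uniform(-0.5, 0.5, (L_out, J + 1))   # output weights (first column is the bias weight)
eta_o = eta_h = 0.05                           # output and hidden layer learning rates

X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)  # toy training inputs
D = np.sin(np.pi * X)                          # toy desired outputs

for epoch in range(2000):
    for k in rng.permutation(len(X)):          # incremental (pattern-by-pattern) updates
        x = np.concatenate(([1.0], X[k]))      # input vector with bias x0 = 1
        z = np.concatenate(([1.0], np.tanh(W_h @ x)))   # hidden activities, Eq. (5.1.3)
        y = W_o @ z                                      # linear output units
        delta_o = D[k] - y                               # output error; fo'(net) = 1 here
        delta_h = (1.0 - z[1:] ** 2) * (W_o[:, 1:].T @ delta_o)  # back-propagated error, Eq. (5.1.9)
        W_o += eta_o * np.outer(delta_o, z)              # delta rule for output weights, Eq. (5.1.2)
        W_h += eta_h * np.outer(delta_h, x)              # hidden layer update

z_all = np.tanh(np.c_[np.ones(len(X)), X] @ W_h.T)       # forward pass over the whole training set
y_all = np.c_[np.ones(len(X)), z_all] @ W_o.T
print("final SSE:", float(np.sum((D - y_all) ** 2)))

Each pass through the inner loop performs one incremental update: a forward pass (Equation (5.1.3)), the output layer delta rule (Equation (5.1.2)), and the back-propagated hidden layer update (Equation (5.1.9)), with the current weights used for both weight changes before either layer is updated.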

The above procedure is based on "incremental" learning, which means that the weights are updated after every presentation of an input pattern. Another alternative is to employ "batch" learning, where weight updating is performed only after all patterns (assuming a finite training set) have been presented. Batch learning is formally obtained by summing the right-hand sides of Equations (5.1.2) and (5.1.9) over all patterns xk. This amounts to gradient descent on the criterion function

(5.1.13)

Even though batch updating moves the search point w in the direction of the true gradient at each update
step, the "approximate" incremental updating is more desirable for two reasons: (1) It requires less storage,
and (2) it makes the search path in the weight space stochastic (here, at each time step the input vector x is
drawn at random) which allows for a wider exploration of the search space and, potentially, leads to better
quality solutions. When backprop converges, it converges to a local minimum of the criterion function
(McInerny et al., 1989). This fact is true of any gradient descent-based learning rule when the surface being
searched is nonconvex (Amari, 1990); i.e., it admits local minima. Using stochastic approximation theory,
Finnoff (1993; 1994) showed that for "very small" learning rates (approaching zero), incremental backprop
approaches batch backprop and produces essentially the same results. However, for small constant learning
rates there is a nonnegligible stochastic element in the training process which gives incremental backprop a
"quasiannealing" character in which the cumulative gradient is continuously perturbed, allowing the search
to escape local minima with small shallow basins of attraction. Thus, solutions generated by incremental
backprop are often practical ones. The local minima problem can be further eased by heuristically adding
random noise to the weights (von Lehman et al., 1988) or by adding noise to the input patterns (Sietsma and
Dow, 1988). In both cases, some noise reduction schedule should be employed to dynamically reduce the
added noise level towards zero as learning progresses.
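
As a brief sketch of the batch alternative (reusing the array conventions assumed in the incremental sketch
above), the pattern-by-pattern changes are accumulated over one full presentation of the training set and
applied in a single step, which corresponds to a descent step on the criterion of Equation (5.1.13):

import numpy as np

def batch_epoch(X, D, W_h, W_o, rho_o=0.1, rho_h=0.1):
    """One batch-backprop cycle: accumulate the changes of Eqs. (5.1.2) and
    (5.1.9) over all patterns, then apply a single update."""
    acc_o = np.zeros_like(W_o)
    acc_h = np.zeros_like(W_h)
    for x, d in zip(X, D):
        z = np.tanh(W_h @ x)
        y = np.tanh(W_o @ z)
        delta_o = (d - y) * (1.0 - y ** 2)
        delta_h = (W_o.T @ delta_o) * (1.0 - z ** 2)
        acc_o += rho_o * np.outer(delta_o, z)
        acc_h += rho_h * np.outer(delta_h, x)
    W_o += acc_o        # single update per presentation of the full training set
    W_h += acc_h
    return W_h, W_o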

Next, the incremental backprop learning procedure is applied to solve a two-dimensional, two-class pattern
classification problem. This problem should help give a good feel for what is learned by the hidden units in
a feedforward neural network, and how the various units work together to generate a desired solution.
Example 5.1.1: Consider the two-class problem shown in Figure 5.1.2. The points inside the shaded region
belong to class B and all other points are in class A. A three layer feedforward neural network trained with
backprop is employed to learn to distinguish between these two classes. The
network consists of an 8-unit first hidden layer, followed by a second hidden layer with 4 units, followed by
a 1-unit output layer. We will refer to such a network as having an 8-4-1 architecture. All units employ a
hyperbolic tangent activation function. The output unit should encode the class of each input vector; a
positive output indicates class B and a negative output indicates class A. Incremental backprop was used
with learning rates set to 0.1. The training set consists of 500 randomly chosen points, 250 from region A
and another 250 from region B. In this training set, points representing class B and class A were assigned
desired output (target) values of +1 and −1, respectively. Training was performed for several hundred cycles
over the training set.

Figure 5.1.2 Decision regions for the pattern classification problem in Example 5.1.1

Figure 5.1.3 shows geometrical plots of all unit responses upon testing the network with a new set of 1000
uniformly randomly generated points inside the [−1, +1]2 region. In generating each plot, a black dot was
placed at the exact coordinates of the test point (input) in the input space if and only if the corresponding
unit response is positive. The boundaries between the dotted and the white regions in the plots represent
approximate decision boundaries learned by the various units in the network. Figure 5.1.3 (a)-(h) represent
the decision boundaries learned by the eight units in the first hidden layer. Figure 5.1.3 (i)-(l) shows the
decision boundaries learned by the four units of the second hidden layer. Figure 5.1.3 (m) shows the
decision boundary realized by the output unit. Note the linear nature of the separating surface realized by
the first hidden layer units, from which complex nonlinear separating surfaces are realized by the second
hidden layer units and ultimately by the output layer unit. This example also illustrates how a single hidden
layer feedforward net (counting only the first two layers) is capable of realizing convex, concave, as well as
disjoint decision regions, as can be seen from Figure 5.1.3 (i)-(l). Here, we neglect the output unit and view
the remaining net as one with an 8-4 architecture.
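
The plots of Figure 5.1.3 can be reproduced for any trained unit by exactly the test described above: a point
is marked if and only if the unit's response to it is positive. A sketch, assuming matplotlib is available and
that unit_response is some function returning the activation of the unit of interest for a two-dimensional
input (both are assumptions for illustration):

import numpy as np
import matplotlib.pyplot as plt

def plot_decision_region(unit_response, n_points=1000, seed=0):
    """Place a black dot at each random test point in [-1, +1]^2
    for which the given unit's response is positive."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_points, 2))
    responses = np.array([unit_response(p) for p in pts])
    pos = pts[responses > 0.0]
    plt.scatter(pos[:, 0], pos[:, 1], s=4, c="black")
    plt.xlim(-1, 1); plt.ylim(-1, 1)
    plt.gca().set_aspect("equal")
    plt.show()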

The present problem can also be solved with smaller networks (fewer hidden units or even a
network with a single hidden layer). However, the training of such smaller networks with backprop may
become more difficult. A smaller network with a 5-3-1 architecture, utilizing a variant of the backprop
learning procedure (Hassoun et al., 1990), is reported in Song (1992); it realizes a separating surface
comparable to the one in Figure 5.1.3 (m).
Figure 5.1.3. Separating surfaces generated by the various units in the 8-4-1 network of Example 5.1.1. (a)-
(h): Separating surfaces realized by the units in the first hidden layer; (i)-(l): Separating surfaces realized by
the units in the second hidden layer; and (m): Separating surface realized by the output unit.

Huang and Lippmann (1988) employed Monte Carlo simulations to investigate the capabilities of backprop
in learning complex decision regions (see Figure 2.3.3). They reported no significant performance
difference between two and three layer feedforward nets when forming complex decision regions using
backprop. They also demonstrated that backprop's convergence time is excessive for complex decision
regions and the performance of such trained classifiers is similar to that obtained with the k-nearest
neighbor classifier (Duda and Hart, 1973). Villiers and Barnard (1993) reported similar simulations, but on
data sets consisting of a "distribution of distributions," where a typical class is a set of clusters
(distributions) in the feature space, each of which can be more or less spread out and may involve
some or all of the dimensions of the feature space; the distribution of distributions thus assigns a probability
to each distribution in the data set. It was found, for networks of equal complexity (same number of
weights), that there is no significant difference between the quality of the "best" solutions generated by two and
three layer backprop-trained feedforward networks; actually, the two layer nets demonstrated better
performance on average. As for the speed of convergence, three layer nets converged faster when the
number of units in the two hidden layers was roughly equal.

Gradient descent search may be eliminated altogether in favor of a stochastic global search procedure that
guarantees convergence to a global solution with high probability; genetic algorithms and simulated
annealing are examples of such procedures and are considered in Chapter 8. However, the assured (in
probability) optimality of these global search procedures comes at the expense of slow convergence. Next, a
deterministic search procedure termed global descent is presented which helps backprop reach globally
optimal solutions.

5.1.2 Global Descent-Based Error Backpropagation

Here, we describe a learning method in which the gradient descent rule in batch backprop is replaced with a
"global descent" rule (Cetin et al. 1993a). This methodology is based on a global optimization scheme,
acronymed TRUST: terminal repeller unconstrained subenergy tunneling, which formulates optimization in
terms of the flow of a special deterministic dynamical system (Cetin et al., 1993b).

Global descent is a gradient descent on a special criterion function C(w, w*) given by

(5.1.14)

where w*, with component values wi*, is a fixed weight vector which can be a local minimum of E(w) or
an initial weight state w0, u(·) is the unit step function, the shifting parameter is typically set to 2, and k is a
small positive constant. The first term in the right-hand side in Equation (5.1.14) is a monotonic
transformation of the original criterion function (e.g., SSE criterion may be used) which preserves all
critical points of E(w) and has the same relative ordering of the local and global minima of E(w). It also
flattens the portion of E(w) above E(w*) with minimal distortion elsewhere. On the other hand, the second
term is a "repeller term" which gives rise to a convex surface with a unique minimum located
at w = w*. The overall effect of this energy transformation is schematically represented for a one-
dimensional criterion function in Figure 5.1.4.
Performing gradient descent on C(w, w*) leads to the "global descent" update rule

(5.1.15)

The first term on the right-hand side of Equation (5.1.15) is a "subenergy gradient", while the second term
is a "non-Lipschitzian" terminal repeller (Zak, 1989). Upon replacing the gradient descent in Equation
(5.1.2) and (5.1.4) by Equation (5.1.15) where wi represents an arbitrary hidden unit or output unit weight,
the modified backprop procedure may escape local minima of the original criterion function E(w) given in
Equation (5.1.13). Here, the batch training is required since Equation (5.1.15) necessitates a unique error
surface for all patterns.

Figure 5.1.4. A plot of a one-dimensional criterion function E(w) with local minimum at w*. The function
E(w) − E(w*) is plotted below, as well as the global descent criterion function C(w, w*).

The update rule in Equation (5.1.15) automatically switches between two phases: a tunneling phase and a
gradient descent phase. The tunneling phase is characterized by E(w) ≥ E(w*). Since for this
condition the subenergy gradient term is nearly zero in the vicinity of the local minimum w*, the terminal
repeller term in Equation (5.1.15) dominates, leading to the dynamical system

dwi/dt = k (wi − wi*)^(1/3)   (5.1.16)

This system has an unstable repeller equilibrium point at w = w*; i.e., at the local minimum of E(w). The
"power" of this repeller is determined by the constant k. Thus, the dynamical system given by Equation
(5.1.15), when initialized with a small perturbation from w*, is repelled from this local minimum until it
reaches a lower energy region E(w) < E(w*); i.e., tunneling through portions of E(w) where
E(w) ≥ E(w*) is accomplished. The second phase is a gradient descent minimization phase,
characterized by E(w) < E(w*). Here, the repeller term is identically zero. Thus, Equation (5.1.15)
becomes

dw/dt = −ρ(w) ∇E(w)   (5.1.17)

where ρ(w) is a dynamic learning rate (step size) determined by the subenergy term. Note
that ρ(w) is approximately constant when E(w*) is larger than E(w) plus the shifting parameter.
Initially, w* is chosen as one corner of a domain in the form of a hyperparallelepiped whose dimension is
that of w in the architecture of Figure 5.1.1. A slightly
perturbed version of w*, namely w* + Δw, is taken as the initial state of the dynamical system in
Equation (5.1.15). Here Δw is a small perturbation which drives the system into the domain of interest. If
E(w* + Δw) < E(w*), the system immediately enters a gradient descent phase which equilibrates at a
local minimum. Every time a new equilibrium is reached, w* is set equal to this equilibrium and Equation
(5.1.15) is reinitialized with the state w* + Δw, which assures a necessary consistency in the search flow
direction. Since w* is now a local minimum, E(w) ≥ E(w*) holds in the neighborhood of w*. Thus, the system
enters a repelling (tunneling) phase, and the repeller at w* repels the system until it reaches a lower basin of
attraction where E(w) < E(w*). As the dynamical system enters the next basin, the system
automatically switches to gradient descent and equilibrates at the next lower local minimum. We then set
w* equal to this new minimum and repeat the process. If, on the other hand, E(w* + Δw) ≥ E(w*) at
the onset of training, then the system is initially in a tunneling phase. The tunneling will proceed to a lower
basin, at which point it enters the minimization phase and follows the behavior discussed above. Training
can be stopped when a minimum w* with E(w*) = 0 is reached, or when E(w*) becomes
smaller than a preset threshold.

The global descent method is guaranteed to find the global minimum for functions of one variable, but not
for multivariate functions. However, in the multidimensional case, the algorithm will always escape from
one local minimum to another with a lower or equal functional value. Figure 5.1.5 compares the learning
curve for the global descent-based backprop to that of batch backprop for the four-bit parity problem in a
feedforward net with four hidden units and a single output unit. The same initial random weights are used in
both cases. The figure depicts one tunneling phase for the global descent algorithm before convergence to a
(perfect) global minimum solution. In performing this simulation, it is found that the direction of the
perturbation vector Δw is very critical in regard to successfully reaching a global minimum. On the other
hand, batch backprop converges to the first local minimum it reaches. This local solution represents a
partial solution to the 4-bit parity problem (i.e., mapping error is present). Simulations using incremental
backprop with the same initial weights as in the above simulations are also performed, but are not shown in
the figure. Incremental backprop was able to produce both of the solutions shown in Figure 5.1.5; very
small learning rates (ρo and ρh) often lead to imperfect local solutions, while relatively larger learning rates
may lead to a perfect solution.
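
The qualitative alternation between descent and tunneling can be illustrated with a one-dimensional toy
sketch. The exact forms of Equations (5.1.14) through (5.1.17) are not reproduced here; the cube-root
repeller flow, the fixed step sizes, and the simple stopping tests below are assumptions chosen only to mimic
the behavior described above.

import numpy as np

def global_descent_1d(E, dE, w0, k=1.0, rho=0.01, eps=1e-3,
                      step=1e-3, max_outer=20, max_inner=20000):
    """Schematic 1-D alternation of gradient descent and repeller tunneling.
    E, dE: criterion and its derivative; w0: initial point."""
    w = w0 + eps
    w_star = w0
    for _ in range(max_outer):
        if E(w) < E(w_star):
            # Gradient descent phase: settle into the local minimum of this basin.
            for _ in range(max_inner):
                w -= rho * dE(w)
                if abs(dE(w)) < 1e-6:
                    break
            w_star = w                      # record the new, lower minimum
        # Tunneling phase: terminal-repeller flow away from w* (assumed cube-root form).
        w = w_star + eps
        escaped = False
        for _ in range(max_inner):
            w += step * k * np.cbrt(w - w_star)
            if E(w) < E(w_star):            # entered a lower basin
                escaped = True
                break
        if not escaped:                     # no lower basin found; stop
            break
    return w_star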

Figure 5.1.5. Learning curves for global descent- and gradient descent-based batch backprop for the 4-bit
parity.

5.2 Backprop Enhancements and Variations

Learning with backprop is slow (Sutton, 1986 ; Huang and Lippmann, 1988). This is due to the
characteristics of the error surface. The surface is characterized by numerous flat and steep regions. In
addition, it has many troughs which are flat in the direction of search. These characteristics are particularly
pronounced in classification problems, especially when the size of the training set is small (Hush et al.,
1991).

Many enhancements of and variations to backprop have been proposed. These are mostly heuristic
modifications with goals of increased speed of convergence, avoidance of local minima, and/or
improvement in the network's ability to generalize. In this section, we present some common heuristics
which may improve these aspects of backprop learning in multilayer feedforward neural networks.

5.2.1 Weight Initialization

Due to its gradient descent nature, backprop is very sensitive to initial conditions. If the choice of the initial
weight vector w0 (here w is a point in the weight space being searched by backprop) happens to be located
within the attraction basin of a strong local minimum attractor (one where the minimum is at the bottom of a
steep-sided valley of the criterion/error surface), then the convergence of backprop will be fast and the
solution quality will be determined by the depth of the valley relative to the depth of the global minimum. On
the other hand, backprop converges very slowly if w0 starts the search in a relatively flat region of the error
surface.

An alternative explanation for the sensitivity of backprop to initial weights (as well as to other learning
parameters) is advanced by Kolen and Pollack (1991). Using Monte Carlo simulations on simple
feedforward nets with incremental backprop learning of simple functions, they discovered a complex
fractal-like structure for convergence as a function of initial weights. They reported regions of high
sensitivity in the weight space where two very close initial points can lead to substantially different learning
curves. Thus, they hypothesize that these fractal-like structures arise in backprop due to the nonlinear nature
of the dynamic learning equations which exhibit multiple attractors; rather than the gradient descent
metaphor with local valleys to get stuck in, they advance a many-body metaphor where the search trajectory
is determined by complex interactions with the system's attractors.

In practice, the weights are normally initialized to small zero-mean random values (Rumelhart et al.,
1986b). The motivation for starting from small weights is that large weights tend to prematurely saturate
units in a network and render them insensitive to the learning process (Hush et al., 1991 ; Lee et al., 1991)
(this phenomenon is known as "flat spot" and is considered in Section 5.2.4). On the other hand,
randomness is introduced as a symmetry breaking mechanism; it prevents units from adopting similar
functions and becoming redundant.

A sensible strategy for choosing the magnitudes of the initial weights for avoiding premature saturation is to
choose them such that an arbitrary unit i starts with a small and random weighted sum neti. This may be
achieved by setting the initial weights of unit i to be on the order of 1/√fi, where fi is the number of inputs
(fan-in) for unit i (Wessels and Barnard, 1992). It can be easily shown that for zero-mean random uniform
weights in [−a, +a], and assuming normalized inputs which are randomly and uniformly distributed in the
range [−1, +1], neti has zero mean and standard deviation a√fi / 3. Thus, by generating uniform
random weights within the range [−3/√fi, +3/√fi], the input to unit i (neti) is a random variable with
zero mean and a standard deviation of unity, as desired.
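
A quick sketch of this initialization (assuming, as above, inputs normalized to the range [−1, +1]):

import numpy as np

def fan_in_uniform_init(fan_in, n_units, seed=0):
    """Uniform weights in [-3/sqrt(fan_in), +3/sqrt(fan_in)] so that, for
    inputs uniform in [-1, +1], each unit's net input has unit variance."""
    rng = np.random.default_rng(seed)
    a = 3.0 / np.sqrt(fan_in)
    return rng.uniform(-a, a, size=(n_units, fan_in))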

In simulations involving single hidden layer feedforward networks for pattern classification and function
approximation tasks, substantial improvements in backprop convergence speed and avoidance of "bad"
local minima are possible by initializing the hidden unit weight vectors to normalized vectors selected
randomly from the training set (Denoeux and Lengellé, 1993).

5.2.2 Learning Rate


The convergence speed of backprop is directly related to the learning rate parameter ρ (ρo and ρh in Equations
(5.1.2) and (5.1.9), respectively); if ρ is small, the search path will closely approximate the gradient path, but
convergence will be very slow due to the large number of update steps needed to reach a local minimum. On
the other hand, if ρ is large, convergence will initially be very fast, but the algorithm will eventually oscillate
and thus not reach a minimum. In general, it is desirable to have large steps when the search point is far
away from a minimum with decreasing step size as the search approaches a minimum. In this section, we
give a sample of the various approaches for selecting the proper learning rate.

One early proposed heuristic (Plaut et al., 1986) is to use constant learning rates which are inversely
proportional to the fan-in of the corresponding units. Extensions of the idea of fan-in dependence of the
learning rate have also been proposed by Tesauro and Janssens (1988). The increased convergence speed
of backprop due to setting the individual learning rate of each unit inversely proportional
to the number of inputs to that unit has been theoretically justified by analyzing the
eigenvalue distribution of the Hessian matrix of the criterion function (Le Cun et al., 1991a). Such
learning rate normalization can be intuitively thought of as maintaining balance between the learning speed
of units with different fan-in. Without this normalization, after each learning iteration, units with high fan-in
have their input activity (net) changed by a larger amount than units with low fan-in. Thus, and due to the
nature of the sigmoidal activation function used, the units with large fan-in tend to commit their output to a
saturated state prematurely and are rendered difficult to adapt (see Section 5.2.4 for additional discussion).
Therefore, normalizing the learning rates of the various units by dividing by their corresponding fan-in
helps speed up learning.

The optimal learning rate for fast convergence of backprop/gradient descent search is the inverse of the
largest eigenvalue of the Hessian matrix H of the error function E, evaluated at the search point w.
Computing the full Hessian matrix is prohibitively expensive for large networks where thousands of
parameters are involved. Therefore, finding the largest eigenvalue λmax for speedy convergence seems rather
inefficient. However, one may employ a shortcut to efficiently estimate λmax (Le Cun et al., 1993). This
shortcut is based on a simple way of approximating the product of H with an arbitrarily chosen (random)
vector z through a Taylor expansion: Hz ≈ [∇E(w + αz) − ∇E(w)]/α, where α is a small positive
constant. Now, using the power method, which amounts to iterating the procedure
z ← H(z/||z||), the vector z converges to λmax cmax, where cmax is the
normalized eigenvector of H corresponding to λmax. Thus, the norm of the converged vector z gives a good
estimate of |λmax|, and its reciprocal may now be used as the learning rate in backprop. An on-line version of
this procedure is reported by Le Cun et al. (1993).
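
A sketch of this shortcut, assuming a routine grad(w) that returns the batch gradient ∇E(w) as a single
array (the routine name and the finite-difference constant are assumptions):

import numpy as np

def largest_hessian_eigenvalue(grad, w, alpha=1e-4, n_iter=50, seed=0):
    """Estimate |lambda_max| of the Hessian of E at w using the
    finite-difference product H z ~ (grad(w + alpha z) - grad(w)) / alpha
    and the power method z <- H (z / ||z||)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(w.shape)
    g0 = grad(w)
    for _ in range(n_iter):
        z_unit = z / np.linalg.norm(z)
        z = (grad(w + alpha * z_unit) - g0) / alpha   # approximately H @ z_unit
    lam_max = np.linalg.norm(z)      # converges to |lambda_max|
    return lam_max                   # 1 / lam_max can serve as the learning rate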

Many heuristics have been proposed so as to adapt the learning rate automatically. Chan and Fallside
(1987) proposed an adaptation rule for ρ that is based on the cosine of the angle between the gradient
vectors ∇E(t) and ∇E(t − 1) (here, t is an integer which represents the iteration number). Sutton (1986)
presented a method which can increase or decrease ρ for each weight wi according to the number of
sign changes observed in the associated partial derivative ∂E/∂wi. This method was also studied empirically
by Jacobs (1988). Franzini (1987) investigated a technique that heuristically adjusts ρ, increasing it
whenever ∇E(t) is close in direction to ∇E(t − 1) and decreasing it otherwise. Cater (1987) suggested using
separate learning rate parameters ρk, one for each pattern xk. Silva and Almeida (1990; see also Vogl et al., 1988)
used a method where the learning rate for a given weight wi is set to u ρi(t − 1) if ∂E/∂wi(t) and
∂E/∂wi(t − 1) have the same sign, with u > 1; if the partial derivatives have different signs, then a learning rate of
d ρi(t − 1) is used, with 0 < d < 1. A similar, theoretically justified method for increasing the convergence
speed of incremental gradient descent search is to increase ρ(t) relative to ρ(t − 1) if ΔE(t) has the same sign as ΔE(t − 1),
and to decrease it otherwise (Pflug, 1990).

When the input vectors are assumed to be randomly and independently chosen from a probability
distribution, we may view incremental backprop as a stochastic gradient descent algorithm, along the lines
of the theory in Section 4.2.2. Thus, simply setting the learning rate to a constant results in persistent
residual fluctuations around a local minimum w*. The variance of such fluctuations depends on the size of ρ,
the criterion function being minimized, and the training set. Based on results from stochastic approximation
theory (Ljung, 1977), the "running average" schedule ρ(t) = ρ0/(1 + t), with sufficiently small ρ0, guarantees
asymptotic convergence to a local minimum w*. However, this schedule leads to very slow convergence.
Here, one would like to start the search with a learning rate larger than this schedule allows but then ultimately converge to
the 1/t rate as w* is approached. Unfortunately, increasing ρ0 can lead to instability for small t. Darken and
Moody (1991) proposed the "search then converge" schedule ρ(t) = ρ0/(1 + t/τ), which allows for faster
convergence without compromising stability. In this schedule, the learning rate stays relatively high for a
"search time" τ during which it is hoped that the weights will hover about a good minimum. Then, for times
t >> τ, the learning rate decreases as ρ0τ/t and the learning converges. Note that for τ = 1, this schedule
reduces to the running average schedule. So a procedure for optimizing τ is needed. A completely automatic
"search then converge" schedule can be found in Darken and Moody (1992).

5.2.3 Momentum

Another simple approach to speed up backprop is through the addition of a momentum term (Plaut et al.,
1986) to the right-hand side of the weight update rules in Equations (5.1.2) and (5.1.9). Here, each weight
change Δwi is given some momentum so that it accelerates in the average downhill direction, instead of
fluctuating with every change in the sign of the associated partial derivative ∂E/∂wi. The addition of
momentum to gradient search is formally stated as

Δwi(t) = −ρ ∂E/∂wi(t) + α Δwi(t − 1)   (5.2.1)

where α is a momentum rate normally chosen between 0 and 1 and Δwi(t − 1) is the weight change applied at the previous update step.


Equation (5.2.1) is a special case of multi-stage gradient methods which have been proposed for
accelerating convergence (Wegstein, 1958) and escaping local minima (Tsypkin, 1971).
The momentum term can also be viewed as a way of increasing the effective learning rate in almost-flat
regions of the error surface while maintaining a learning rate close to ρ (here 0 < ρ < 1) in regions with high
fluctuations. This can be seen by employing an N-step recursion and writing (5.2.1) as

Δwi(t) = −ρ Σn α^n ∂E/∂wi(t − n) + α^(N+1) Δwi(t − N − 1),  n = 0, 1, ..., N   (5.2.2)

If the search point is caught in a flat region, then ∂E/∂wi will be about the same at each time step and Equation
(5.2.2) can be approximated as (with 0 < α < 1 and N large)

Δwi(t) ≈ −[ρ/(1 − α)] ∂E/∂wi(t)   (5.2.3)

Thus, for flat regions, a momentum term leads to increasing the learning rate by a factor 1/(1 − α). On the
other hand, if the search point is in a region of high fluctuation, the weight change will not gain momentum;
i.e., the momentum effect vanishes. An empirical study of the effects of ρ and α on the convergence of
backprop and on its learning curve can be found in Tollenaere (1990).
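
A minimal sketch of the update in Equation (5.2.1) for a weight array (the names are assumptions):

import numpy as np

def momentum_step(w, grad_w, prev_dw, rho=0.1, alpha=0.9):
    """One update with momentum: dw(t) = -rho * dE/dw + alpha * dw(t-1).
    In flat regions this behaves like an effective rate rho / (1 - alpha)."""
    dw = -rho * grad_w + alpha * prev_dw
    return w + dw, dw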

Adaptive momentum rates may also be employed. Fahlman (1989) proposed and extensively simulated a
heuristic variation of backprop, called quickprop, which employs a dynamic momentum rate given by

α(t) = [∂E/∂wi(t)] / [∂E/∂wi(t − 1) − ∂E/∂wi(t)]   (5.2.4)

With this adaptive α (t) substituted in (5.2.1), if the current slope is persistently smaller than the previous
one but has the same sign, then α (t) is positive and the weight change will accelerate. Here, the
acceleration rate is determined by the magnitude of successive differences between slope values. If the
current slope is in the opposite direction from the previous one, it signals that the weights are crossing over
a minimum. In this case, α (t) has a negative sign and the weight change starts to decelerate. Additional
heuristics are used to handle the undesirable case where the current slope is in the same direction as the
previous one, but has the same or larger magnitude; otherwise, this scenario would lead to taking an infinite
step or moving the search point backwards, or up the current slope and toward a local maximum.
Substituting Equation (5.2.4) in (5.2.1) leads to the update rule

Δwi(t) = −ρ ∂E/∂wi(t) + {[∂E/∂wi(t)] / [∂E/∂wi(t − 1) − ∂E/∂wi(t)]} Δwi(t − 1)   (5.2.5)
It is interesting to note that Equation (5.2.5) corresponds to steepest gradient descent-based adaptation with
a dynamically changing effective learning rate ρ(t). This learning rate is given by the sum of the original
constant learning rate ρ and the reciprocal of the denominator of the second term in the right-hand side of
Equation (5.2.5).

The use of error gradient information at two consecutive time steps in Equation (5.2.4) to improve
convergence speed can be justified as being based on approximations of second-order search methods such
as Newton's method. The Newton method (e.g., Dennis and Schnabel, 1983) is based on a quadratic model
of the criterion E(w) and hence uses only the first three terms in a Taylor series expansion of E
about the "current" weight vector wc:

E(w) ≈ E(wc) + ∇E(wc)^T (w − wc) + (1/2)(w − wc)^T H (w − wc)

This quadratic function is minimized by solving the equation ∇E(wc) + H(w − wc) = 0, which leads to
Newton's method: Δw = w − wc = −H^(−1) ∇E(wc). Here H is the
Hessian matrix with components ∂²E/(∂wi ∂wj), evaluated at wc.

Newton's algorithm iteratively computes the weight changes Δw and works well when initialized within a
convex region of E. In fact, the algorithm converges quickly if the search region is quadratic or nearly so.
However, this method is very computationally expensive since the computation of H^(−1) requires O(N³)
operations at each iteration (here, N is the dimension of the search space). Several authors have suggested
computationally efficient ways of approximating Newton's method (Parker, 1987 ; Ricotti et al., 1988 ;
Becker and Le Cun, 1989). Becker and Le Cun proposed an approach whereby the off-diagonal elements of
H are neglected, thus arriving at the approximation

Δwi = −(∂E/∂wi) / (∂²E/∂wi²)   (5.2.6)

which is a "decoupled" form of Newton's rule where each weight is updated separately. The second term in
the right-hand side of Equation (5.2.5) can now be viewed as an approximation of Newton's rule, since its
denominator is a crude approximation of the second derivative of E at step t. In fact, this suggests that the
weight update rule in Equation (5.2.5) may be used with ρ = 0.

As with Equation (5.2.4), special heuristics must be used in order to prevent the search from moving in the
wrong gradient direction and in order to deal with regions of very small curvature, such as inflection points
and plateaus, which cause Δwi in Equation (5.2.6) to blow up. A simple solution is to replace the term
∂²E/∂wi² in Equation (5.2.6) by ∂²E/∂wi² + µ, where µ is a small positive constant. The approximate Newton
method described above is capable of scaling the descent step in each direction. However, because it
neglects off-diagonal Hessian terms, it is not able to rotate the search direction as in the exact Newton's
method. Thus, this approximate rule is only efficient if the directions of maximal and minimal curvature of
E happen to be aligned with the weight space axes. Bishop (1992) reported a somewhat efficient technique
for computing the elements of the Hessian matrix exactly, using multiple feedforward propagation through
the network, followed by multiple backward propagation.
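
A sketch of the decoupled update with the µ safeguard (the arrays of first derivatives and diagonal second
derivatives are assumed to be supplied by the caller):

import numpy as np

def diagonal_newton_step(w, grad_w, diag_hess, mu=0.05):
    """Decoupled Newton update, Eq. (5.2.6), with the curvature term
    regularized as d2E/dw_i^2 + mu to avoid blow-up near inflection points
    and plateaus (the extra sign heuristics mentioned in the text are
    omitted from this sketch)."""
    return w - grad_w / (diag_hess + mu)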
Another approach for deriving theoretically justifiable update schedules for the momentum rate in Equation
(5.2.1) is to adjust α (t) at each update step such that the gradient descent search direction is "locally"
optimal. In "optimum" steepest descent (also known as best-step steepest descent), the learning rate is set at
time t such that it minimizes the criterion function E at time step t + 1; i.e., we desire a ρ which minimizes
E{w(t) − ρ ∇E[w(t)]}. Unfortunately, this optimal learning step is impractical
since it requires the computation of the Hessian ∇²E at each time step (refer to Problem 5.2.12 for an
expression for the optimal ρ). However, we may still use some of the properties of the optimal ρ in order to
accelerate the search, as we demonstrate next.

When w(t) is specified, the necessary condition for minimizing E[w(t + 1)] is (Tompkins, 1956; Brown,
1959)

∇E(t + 1)^T ∇E(t) = 0   (5.2.7)

This implies that the search directions in two successive steps of optimum steepest descent are orthogonal.
The easiest method to enforce the orthogonality requirement is the Gram-Schmidt orthogonalization method.
Suppose that we know the search direction at time t − 1, denoted d(t − 1), and that we compute the "exact"
gradient ∇E(t) (used in batch backprop) at time step t [to simplify notation, we write ∇E[w(t)] as ∇E(t)]. Now,
we can satisfy the condition of orthogonal consecutive search directions by computing a new search
direction, employing Gram-Schmidt orthogonalization (Yu et al., 1993)

d(t) = −∇E(t) + {[∇E(t)^T d(t − 1)] / [d(t − 1)^T d(t − 1)]} d(t − 1)   (5.2.8)

Performing descent search in the direction d(t) in Equation (5.2.8) leads to the weight vector update rule

Δw(t) = −ρ ∇E(t) + ρ {[∇E(t)^T Δw(t − 1)] / [Δw(t − 1)^T Δw(t − 1)]} Δw(t − 1)   (5.2.9)

where the relation Δw(t − 1) = w(t) − w(t − 1) = ρ d(t − 1) has been used. Comparing the component-wise
weight update version of Equation (5.2.9) to Equation (5.2.1) reveals another adaptive momentum rate
given by α(t) = ρ [∇E(t)^T Δw(t − 1)] / [Δw(t − 1)^T Δw(t − 1)].

Another similar approach is to set the current search direction d(t) to be a compromise between the current
"exact" gradient ∇E(t) and the previous search direction d(t − 1); i.e., d(t) = −∇E(t) + β(t) d(t − 1), with
β(t) ≥ 0. This is the basis for the conjugate gradient method in which the search direction is
chosen (by appropriately setting β) so that it distorts as little as possible the minimization achieved by the
previous search step. Here, the current search direction is chosen to be conjugate (with respect to H) to the
previous search direction. Analytically, we require d(t − 1)^T H(t − 1) d(t) = 0, where the Hessian is
assumed to be positive-definite. In practice, β, which plays the role of an adaptive momentum, is chosen
according to the Polak-Ribière rule (Polak and Ribière, 1969; Press et al., 1986):

β(t) = {[∇E(t) − ∇E(t − 1)]^T ∇E(t)} / [∇E(t − 1)^T ∇E(t − 1)]

Thus, the search direction in the conjugate gradient method at time t is given by

d(t) = −∇E(t) + β(t) d(t − 1)

Now, using Δw(t − 1) = ρ d(t − 1) and substituting the above expression for d(t) in Δw(t) = ρ d(t) leads to
the weight update rule:

Δw(t) = −ρ ∇E(t) + β(t) Δw(t − 1)
When E is quadratic, the conjugate gradient method theoretically converges in N or fewer iterations. In
general, E is not quadratic and therefore this method would be slower than what the theory predicts.
However, it is reasonable to assume that E is approximately quadratic near a local minimum. Therefore,
conjugate gradient descent is expected to accelerate the convergence of backprop once the search enters a
small neighborhood of a local minimum. As a general note, the basic idea of conjugate gradient search was
introduced by Hestenes and Stiefel (1952). Beckman (1964) gives a good account of this method. Battiti
(1992) and van der Smagt (1994) gave additional characterization of second-order backprop (such as
conjugate gradient-based backprop) from the point of view of optimization. The conjugate gradient method
has been applied to multilayer feedforward neural net training (Kramer and Sangiovanni-Vincentelli, 1989 ;
Makram-Ebeid et al., 1989 ; van der Smagt, 1994) and is shown to outperform backprop in speed of
convergence.
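
A sketch of one conjugate gradient step with the Polak-Ribière choice of β, assuming a routine grad(w) for
the batch gradient. The line search along d(t) is replaced here by a fixed ρ, and β is clamped to be
non-negative (a common restart safeguard); both are assumptions of this sketch rather than part of the
method as described above.

import numpy as np

def conjugate_gradient_step(w, d_prev, g_prev, grad, rho=0.05):
    """One Polak-Ribiere conjugate gradient step:
    beta(t) = (g(t) - g(t-1))^T g(t) / (g(t-1)^T g(t-1)),
    d(t) = -g(t) + beta(t) d(t-1),  w(t+1) = w(t) + rho d(t)."""
    g = grad(w)
    beta = max(0.0, (g - g_prev) @ g / (g_prev @ g_prev))  # clamp: restart if negative
    d = -g + beta * d_prev
    return w + rho * d, d, g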

It is important to note that the above second-order modifications to backprop improve the speed of
convergence of the weights to the "closest" local minimum. This faster convergence to local minima is the
direct result of employing a better search direction as compared to incremental backprop. On the other
hand, the stochastic nature of the search directions of incremental backprop and its fixed learning rates can
be an advantage since it allows the search to escape shallow local minima, which generally leads to better
solution quality. These observations suggest the use of hybrid learning algorithms (Møller, 1990 ; Gorse
and Shepherd, 1992) where one starts with incremental backprop and then switches to conjugate gradient-
based backprop for the final convergence phase. This hybrid method has its roots in a technique from
numerical analysis known as Levenberg-Marquardt optimization (Press et al., 1986).

As a historical note, we mention that the concept of gradient descent was first introduced by Cauchy (1847)
for use in the solution of simultaneous equations; the method has enjoyed popularity ever since. It should
also be noted that some of the above enhancements to gradient search date back to the fifties and sixties and
are discussed in Tsypkin (1971). Additional modifications of the gradient descent method which enhance
its convergence to global minima are discussed in Section 8.1. For a good survey of gradient search the
reader is referred to the book by Polyak (1987).
5.2.4 Activation Function

As indicated earlier in this section, backprop suffers from premature convergence of some units to flat
spots. During training, if a unit in a multilayer network receives a weighted signal net with a large
magnitude, this unit outputs a value close to one of the saturation levels of its activation function. If the
corresponding target value (desired target value for an output unit, or an unknown "correct" hidden target
for a hidden unit) is substantially different from that of the saturated unit, we say that the unit is incorrectly
saturated or has entered a flat spot. When this happens, the size of the weight update due to backprop will
be very small even though the error is relatively large, and it will take an excessively long time for such
incorrectly saturated units to reverse their states. This situation can be explained by referring to Figure
5.2.1., where the activation function f(net) = tanh(β net) and its derivative f ' [given in Equation (5.1.12)]
are plotted for β = 1.

Figure 5.2.1. Plots of f(net) = tanh(net) and its derivative f '(net).

Here, when net has large magnitude, f ' approaches zero. Thus, the weight change approaches zero in
Equations (5.1.2) and (5.1.9) even when there is a large difference between the actual and desired/correct
output for a given unit.

A simple solution to the flat spot problem is to bias the derivative of the activation function (Fahlman,
1989); i.e., replace fo' and fh' in Equation (5.1.2) and (5.1.9) by fo' + ε and fh' + ε , respectively (a typical
value for ε is 0.1). Hinton (1987a) suggested the use of a nonlinear error function that goes to infinity at
the points where f ' goes to zero, resulting in a finite non-zero error value (see Franzini (1987) for an
example of using such an error function). The entropic criterion of Chapter 3, Equation (3.1.76), is a good
choice for the error function since it leads to an output unit update rule similar to that of Equation (5.1.2)
but without the fo' term (note, however, that the update rule for the hidden units would still have the
derivative term; see Equation (5.2.18) in Section 5.2.7). One may also modify the basic sigmoid activation
function in backprop in order to reduce flat spot effects. The use of a homotopy activation function, formed
as a convex combination of a linear function and a sigmoid function and controlled by a homotopy
parameter in [0, 1], is one such example (Yang and Yu, 1993). Initially, the homotopy parameter is set to 1;
that is, all nodes have linear activations. Backprop is used to achieve a minimum in E, then the parameter is
decreased (monotonically) and backprop is continued until it reaches zero. That is, the activation function
recovers its sigmoidal nature gradually as training progresses. Since the derivative of this activation
function is bounded away from zero for most of the training phase, flat spot effects are eliminated. Besides
reducing the effects of flat spot, the homotopy function also helps backprop escape some local minima. This
can be seen by noting that when the homotopy parameter equals 1, the error function is a polynomial
of w which has a relatively smaller number of local minima than the error function of the fully sigmoidal
network. Because the minimum achieved at each value of the parameter provides a relatively good initial
point for minimizing the error at the next (smaller) value, many unwanted local minima are avoided. An
alternative explanation of the
effect of a gradually increasing activation function slope on the avoidance of local minima is given in
Section 8.4 based on the concept of mean-field annealing.
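
A sketch of the simplest remedy mentioned above, the biased derivative of Fahlman (1989), for hyperbolic
tangent units (the value ε = 0.1 follows the text):

import numpy as np

def tanh_deriv_with_offset(f, beta=1.0, eps=0.1):
    """Biased derivative f'(net) + eps used in place of f' in Eqs. (5.1.2)
    and (5.1.9), so weight updates stay nonzero even for saturated units."""
    return beta * (1.0 - f ** 2) + eps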
Another method for reducing flat spot effects involves dynamically updating the activation slope (λ and β
in Equations (5.1.11) and (5.1.12), respectively) such that the slope of each unit is adjusted, independently,
in the direction of reduced output error (Tawel, 1989 ; Kufudaki and Horejs, 1990 ; Rezgui and
Tepedelenlioglu, 1990 ; Kruschke and Movellan, 1991; Sperduti and Starita, 1991, 1993). Gradient descent
on the error surface in the activation function's slope space leads to the following update rules (assuming
hyperbolic tangent activation functions)

Δβl = ρo (dl − yl)(1 − yl²) netl   (5.2.10)

and

Δβj = ρh [Σl (dl − yl) βl (1 − yl²) wlj] (1 − zj²) netj   (5.2.11)

for the lth output unit and the jth hidden unit, respectively. Here, ρo and ρh are small positive constants.
Typically, when initialized with slopes near unity, Equations (5.2.10) and (5.2.11) reduce the activation
slopes toward zero, which increases the effective dynamic range of the activation function which, in turn,
reduces flat spot effects and therefore allows the weights to update rapidly in the initial stages of learning.
As the algorithm begins to converge, the slope starts to increase and thus restores the saturation properties
of the units. It is important to note here that the slope adaptation process just described becomes a part of
the backprop weight update procedure; the slopes are updated after every weight update step.

Other nonsigmoid activation functions may be utilized as long as they are differentiable (Robinson et al.,
1989). From the discussion on the approximation capabilities of multilayer feedforward networks in Section
2.3, a wide range of activation functions may be employed without compromising the universal
approximation capabilities of such networks. However, the advantages of choosing one particular class of
activation functions (or a mixture of various functions) is not completely understood. Moody and Yarvin
(1992) reported an empirical study where they have compared feedforward networks with a single hidden
layer feeding into a single linear output unit, each network employing a different type of differentiable
nonlinear activation function. The types of activation functions considered by Moody and Yarvin included
the sigmoid logistic function, polynomials, rational functions (ratios of polynomials), and Fourier series
(sums of cosines). Benchmark simulations on a few data sets representing noisy data with only mild
nonlinearity and noiseless data with a high degree of nonlinearity were performed. It was found that the
networks with nonsigmoidal activations attained superior performance on the highly nonlinear noiseless
data. On the set of noisy data with mild nonlinearity, however, polynomials did poorly, whereas rationals
and Fourier series showed better performance and were comparable to sigmoids.

Other methods for improving the training speed of feedforward multilayer networks involve replacing the
sigmoid units by Gaussian or other units. These methods are covered in Chapter 6.

5.2.5 Weight Decay, Weight Elimination, and Unit Elimination

In Chapter 4 (Section 4.8), we saw that in order to guarantee good generalization, the number of degrees of
freedom or number of weights (which determines a network's complexity) must be considerably smaller
than the amount of information available for training. Some insight into this matter can be gained from
considering an analogous problem in curve fitting (Duda and Hart, 1973 ; Wieland and Leighton, 1987).

For example, consider the rational function g(x) which is plotted in Figure 5.2.2
(solid line), and assume that we are given a set of 15 samples (shown as small circles) from which we are
to find a "good" approximation to g(x). Two polynomial approximations are shown in Figure 5.2.2: An
eleventh-order polynomial (dashed line) and an eighth-order polynomial (dotted line). These
approximations are computed by minimizing the SSE criterion over the sample points. The higher order
polynomial has about the same number of parameters as the number of training samples, and thus is shown
to give a very close fit to the data; this is referred to as "memorization." However, it is clear from the figure
that this polynomial does not provide good "generalization" (i.e., it does not provide reliable interpolation
and/or extrapolation) over the full range of the data. On the other hand, fitting the data by an eighth-order
polynomial leads to relatively better overall interpolations over a wider range of x values (refer to the dotted
line in Figure 5.2.2). In this case, the number of free parameters is equal to nine which is smaller than the
number of training samples. This "overdetermined" nature leads to an approximation function that better
matches the "smooth" function g(x) being approximated. Trying to use a yet lower order polynomial (e.g.,
fifth-order or less) leads to a poor approximation because this polynomial would not have sufficient
"flexibility" to capture the nonlinear structure in g(x).

Figure 5.2.2. Polynomial approximations for the function g(x) (shown as a solid
line), based on the 15 samples shown (small circles). The objective of the approximation is to minimize the
sum of squared error criterion. The dashed line represents an eleventh-order polynomial. A better overall
approximation for g(x) is given by an eighth-order polynomial (dotted line).

The reader is advised to consider the nature and complexity of this simple approximation problem by
carefully studying Figure 5.2.2. Here, the total number of possible training samples of the form (x, g(x)) is
uncountably infinite. From this huge set of potential data, though, we chose only 15 samples to try to
approximate the function. In this case, the approximation involved minimizing an SSE criterion function
over these few sample points. Clearly, however, a solution which is globally (or near globally) optimal in
terms of sum-squared error over the training set (for example, the eleventh order polynomial) may be hardly
appropriate in terms of interpolation (generalization) between data points. Thus, one should choose a class
of approximation functions which penalizes unnecessary fluctuations between training sample points.
Neural networks satisfy this approximation property and are thus superior to polynomials in approximating
arbitrary nonlinear functions from sample points (see further discussion given below). Figure 5.2.3 shows
the results of simulations involving the approximation of the function g(x), with the same set of samples
used in the above simulations, using single hidden layer feedforward neural nets. Here, all hidden units
employ the hyperbolic tangent activation function (with a slope of 1), and the output unit is linear. These
nets are trained using the incremental backprop algorithm [given by Equations (5.1.2) and (5.1.9)] with
ρh = 0.01. Weights are initialized randomly and uniformly over the range [−0.2, +0.2]. The
training was stopped when the rate of change of the SSE became insignificantly small. The dotted line in
Figure 5.2.3 is for a net with three hidden units (which amounts to 10 degrees of freedom). Surprisingly,
increasing the number of hidden units to 12 units (37 degrees of freedom) improved the quality of the fit as
shown by the dashed line in the figure. By comparing Figures 5.2.3 and 5.2.2, it is clear that the neural net
approximation for g(x) is superior to that of polynomials in terms of accurate interpolation and
extrapolation.
Figure 5.2.3. Neural network approximation for the function g(x) (shown as a solid
line). The dotted line was generated by a 3-hidden unit feedforward net. The dashed line, which is shown to
have substantial overlap with g(x), was generated by a 12-hidden unit feedforward net. In both cases,
standard incremental backprop training was used.

The generalization superiority of the neural net can be attributed to the bounded and smooth nature of the
hidden unit responses, as compared to the potentially divergent nature of polynomials. The bounded unit
response localizes the nonlinear effects of individual hidden units in a neural network and allows for the
approximations in different regions of the input space to be independently tuned. This approximation
process is similar in its philosophy to the traditional spline technique for curve fitting (Schumaker, 1981).
Hornik et al. (1990) gave related theoretical justification for the usefulness of feedforward neural nets with
sigmoidal hidden units in function approximation. They showed that, in addition to approximating the
training set, the derivative of the output of the network evaluated at the training data points is also a good
approximation of the derivative of the unknown function being approximated. This result explains the good
extrapolation capability of neural nets observed in simulations. For example, the behavior of the neural net
output shown in Figure 5.2.3 for x > 10 and x < −5 is a case in point. It should be noted, though, that in most
practical situations the training data is noisy. Hence, an exact fit of this data must be avoided, which means
that the degrees of freedom of a neural net approximator must be constrained. Otherwise, the net will have a
tendency for overfitting. This issue is explored next.

Once we decide on a particular approximation function or network architecture, generalization can be


improved if the number of free parameters in the net is optimized. Since it is difficult to estimate the
optimal number of weights (or units) a priori, there has been much interest in techniques that automatically
remove excess weights and/or units from a network. These techniques are sometimes referred to as network
" pruning" algorithms and are surveyed in Reed (1993).

One of the earliest and simplest approaches to remove excess degrees of freedom from a neural network is
through the use of simple weight decay (Plaut et al., 1986 ; Hinton, 1986) in which each weight decays
towards zero at a rate proportional to its magnitude, so that connections disappear unless reinforced. Hinton
(1987b) gave empirical justification by showing that such weight decay improves generalization in
feedforward networks. Krogh and Hertz (1992) gave some theoretical justification for this generalization
phenomenon.

Weight decay in the weight update equations of backprop can be accounted for by adding a complexity
(regularization) term to the criterion function E that penalizes large weights,

J(w) = E(w) + (λ/2) Σi wi²   (5.2.12)

Here, λ represents the relative importance of the complexity term with respect to the error term E(w) [note
that the second term in Equation (5.2.12) is a regularization term as in Equation (4.1.3)]. Now, gradient
search for minima of J(w) leads to the following weight update rule

Δwi = −ρ ∂E/∂wi − ρ λ wi   (5.2.13)

which shows an exponential decay in wi if no learning occurs. Because it penalizes more weights than
necessary, the criterion function in Equation (5.2.12) overly discourages the use of large weights where a
single large weight costs much more than many small ones. Weigend et al. (1991) proposed a procedure of
weight-elimination given by minimizing
J(w) = E(w) + λ Σi (wi/w0)² / [1 + (wi/w0)²]   (5.2.14)

where the penalty term on the right-hand side helps regulate weight magnitudes and w0 is a positive free
parameter which must be determined. For large w0, this procedure reduces to the weight decay procedure
described above and hence favors many small weights, whereas if w0 is small, fewer large weights are

favored. Also, note that when |wi| >> w0, the cost of the weight approaches one (times λ), which justifies
the interpretation of the penalty term as a counter of large weights. In practice, a w0 close to unity is used. It
should be noted that the above weight elimination procedure is very sensitive to the choice of λ. A heuristic
for adjusting λ dynamically during learning is described in Weigend et al. (1991). For yet other forms of the
complexity term, the reader is referred to Nowlan and Hinton (1992a and 1992b), and Section 5.2.7.
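
A sketch of the two penalty gradients as they would be combined with the backprop weight change (λ and
w0 as above; whether the penalty is applied after every pattern or once per batch is left to the caller, and the
function names are assumptions):

import numpy as np

def weight_decay_term(w, lam):
    """Gradient of (lam/2) * sum(w_i**2), Eq. (5.2.12); subtracting rho times
    this from the update gives the plain decay of Eq. (5.2.13)."""
    return lam * w

def weight_elimination_term(w, lam, w0=1.0):
    """Gradient of lam * sum((w/w0)**2 / (1 + (w/w0)**2)), the Weigend et al.
    penalty of Eq. (5.2.14); its per-weight cost saturates near lam for
    |w| >> w0, so the penalty acts like a counter of large weights."""
    u = (w / w0) ** 2
    return lam * (2.0 * w / w0 ** 2) / (1.0 + u) ** 2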

The above ideas have been extended to unit elimination (e.g., see Hanson and Pratt, 1989 ; Chauvin, 1989 ;
Hassoun et al., 1990). Here, one would start with an excess of hidden units and dynamically discard
redundant ones. As an example, one could penalize redundant units by replacing the weight decay term in

Equation (5.2.13) by for all weights of hidden units, which leads to the hidden
unit update rule

(5.2.15)

Generalization in feedforward networks can also be improved by utilizing network construction procedures,
as opposed to weight or unit pruning. Here, we start with a small network and allow it to grow gradually
(add more units) in response to incoming data. The idea is to keep the network as small as possible. In
Chapter 6 (Section 6.3) three adaptive networks having unit-allocation capabilities are discussed. Further
details on network construction procedures can be found in Marchand et al. (1990) , Frean (1990) , Fahlman
and Lebiere (1990) , and Mézard and Nadal (1989).

5.2.6 Cross Validation

An alternative or complementary strategy to the above methods for improving generalization in


feedforward neural networks is suggested by findings based on empirical results (Morgan and Bourlard,
1990; Weigend et al., 1991 ; Hergert et al., 1992). In simulations involving backprop training of
feedforward nets on noisy data, it is found that the validation (generalization) error decreases monotonically
to a minimum but then starts to increase, even as the training error continues to decrease. This phenomenon
is depicted in the conceptual plot in Figure 5.2.4, and is illustrated through the computer simulation given
next.
Figure 5.2.4. Training error (dashed curve) and validation error (solid curve) encountered in training
multilayer feedforward neural nets using backprop.

Consider the problem of approximating the rational function g(x) plotted in Figure 5.2.3, from a set of noisy
sample points. This set of points is generated from the 15 perfect samples, shown in Figure 5.2.3, by adding
zero-mean normally distributed random noise whose variance is equal to 0.25. A single hidden layer
feedforward neural net is used with 12 sigmoidal hidden units and a single linear output unit. It employs

incremental backprop training with ρo = 0.05, ρh = 0.01, and random initial weights. After
80 training cycles on the 15 noisy samples, the net is tested for uniformly sampled inputs x in the range
[−8, 12]. The output of this 80-cycle net is shown as a dashed line in Figure 5.2.5. Next, the training
continued and then stopped after 10,000 cycles. The output of the resulting net is shown as a dotted line in
the figure. Comparing the two approximations in Figure 5.2.5 leads to the conclusion that the partially
trained net is superior to the excessively trained net in terms of overall interpolation and extrapolation
capabilities. Further insight into the dynamics of the generalization process for this problem can be gained
from Figure 5.2.6. Here, the validation RMS error is monitored by testing the net on a validation set of 294
perfect samples, uniformly spaced in the interval [−8, 12], after every 10 training cycles. This validation
error is shown as the dashed line in Figure 5.2.6. The training error (RMS error on the training set of 15
points) is also shown in the figure as a solid line. Note that the optimal net in terms of overall generalization
capability is the one obtained after about 80 to 90 training cycles. Beyond this training point, the training
error keeps decreasing, while the validation error increases. It is interesting to note the non-monotonic
behavior of the validation error between training cycles 2000 and 7000. This suggests that, in general,
multiple local minima may exist in the validation error curves of backprop trained feedforward neural
networks. The location of these minima is a complex function of the network size, weight initialization, and
learning parameters. To summarize, when training with noisy data, excessive training usually leads to
overfitting. On the other hand, partial training may lead to a better approximation of the unknown function
in the sense of improved interpolation and, possibly, improved extrapolation.

Figure 5.2.5. Two different neural network approximations of the rational function g(x), shown as a solid
line, from noisy samples. The training samples shown are generated from the 15 perfect samples in Figure
5.2.3 by adding zero-mean normally distributed random noise with 0.25 variance. Both approximations
resulted from the same net with 12 hidden units and incremental backprop learning. The dashed line
represents the output of the net after 80 learning cycles. After completing 10,000 learning cycles, the same
net generates the dotted line output.

Figure 5.2.6. Training and validation RMS errors for the neural net approximation of the function g(x). The
training set consists of the 15 noisy samples in Figure 5.2.5. The validation set consists of 294 perfect
samples uniformly spaced in the interval [−8, 12]. The validation error starts lower than the training error
mainly because perfect samples are used for validation.

The generalization phenomenon depicted in Figure 5.2.4 (and illustrated by the simulation in Figures 5.2.5
and 5.2.6) does not currently have a complete theoretical justification. A qualitative explanation for it was
advanced by Weigend et al. (1991). They explain that to a first approximation, backprop initially adapts the
hidden units in the network such that they all attempt to fit the major features of the data. Later, as training
proceeds, some of the units then start to fit the noise in the data. This later process continues as long as
there is error and as long as training continues (this is exactly what happens in the simulation of Figure
5.2.5). The overall process suggests that the effective number of free parameters (weights) starts small
(even if the network is oversized) and gets larger approaching the true number of adjustable parameters in
the network as training proceeds. Baldi and Chauvin (1991) derived analytical results on the behavior of the
validation error in LMS-trained single layer feedforward networks learning the identity map from noisy
autoassociation pattern pairs. Their results agree with the above generalization phenomenon in nonlinear
multilayer feedforward nets.

Therefore, a suitable strategy for improving generalization in networks of non-optimal size is to avoid "
overtraining" by carefully monitoring the evolution of the validation error during training and stopping just
before it starts to increase. This strategy is based on one of the early criteria in model evaluation known as
cross-validation (e.g., see Stone, 1978). Here, the whole available data set is split into three parts: Training
set, validation set, and prediction set. The training set is used to determine the values of the weights of the
network. The validation set is used for deciding when to terminate training. Training continues as long as
the performance on the validation set keeps improving. When it ceases to improve, training is stopped. The
third part of the data, the prediction set, is used to estimate the expected performance (generalization) of the
trained network on new data. In particular, the prediction set should not be used for validation during the
training phase. Note that this heuristic requires the application to be data-rich. Some applications, though,
suffer from scarcity of training data, which makes this method inappropriate. The reader is referred to Finnoff et al. (1993) for an empirical study of cross-validation-based generalization and its comparison to weight decay and other generalization-inducing methods.
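The stopping heuristic can be summarized in a short sketch. The following is a minimal illustration in Python, assuming an arbitrary one-hidden-layer tanh network, a synthetic noisy data set, and an ad hoc patience rule; none of these choices come from the text.

import numpy as np

rng = np.random.default_rng(0)

def target_fn(x):                  # stand-in for the unknown function being learned
    return np.sin(x) / (1.0 + 0.1 * x**2)

x_all = rng.uniform(-8, 12, 60)
y_all = target_fn(x_all) + rng.normal(0.0, 0.5, x_all.shape)   # noisy samples
x_tr, y_tr = x_all[:30], y_all[:30]        # training set: determines the weights
x_va, y_va = x_all[30:45], y_all[30:45]    # validation set: decides when to stop
x_te, y_te = x_all[45:], y_all[45:]        # prediction set: final performance estimate only

H, lr = 12, 0.01
W1, b1 = rng.normal(0, 0.5, (H, 1)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, (1, H)), np.zeros(1)
params = [W1, b1, W2, b2]

def forward(x, params):
    W1, b1, W2, b2 = params
    h = np.tanh(x[:, None] @ W1.T + b1)
    return h, (h @ W2.T + b2).ravel()

def rms(params, x, y):
    _, yhat = forward(x, params)
    return np.sqrt(np.mean((y - yhat) ** 2))

best, best_params, wait, patience = np.inf, None, 0, 50
for epoch in range(5000):
    h, yhat = forward(x_tr, params)
    err = (yhat - y_tr)[:, None]                  # dE/dy for the SSE criterion
    gW2 = err.T @ h; gb2 = err.sum(0)
    dh = (err @ params[2]) * (1 - h**2)
    gW1 = dh.T @ x_tr[:, None]; gb1 = dh.sum(0)
    for p, gp in zip(params, (gW1, gb1, gW2, gb2)):
        p -= lr * gp / len(x_tr)
    v = rms(params, x_va, y_va)
    if v < best:
        best, best_params, wait = v, [p.copy() for p in params], 0
    else:
        wait += 1
        if wait > patience:                       # validation error stopped improving
            break
print("validation RMS at stop:", best)
print("prediction-set RMS:", rms(best_params, x_te, y_te))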


5.2.7 Criterion Functions

As seen earlier in Section 3.1.5, other criterion/error functions can be used from which new versions of the
backprop weight update rules can be derived. Here, we consider two such criterion functions: (1) relative entropy and (2) Minkowski-r. Starting from the instantaneous entropy criterion (Baum and Wilczek, 1988)
(5.2.16)

and employing gradient descent search, we may obtain the learning equations

(5.2.17)

and

(5.2.18)

for the output and hidden layer units, respectively. Equations (5.2.17) and (5.2.18) assume hyperbolic
tangent activations at both layers.

From Equation (5.2.17) we see that the fo' term present in the corresponding equation of standard backprop [Equation (5.1.2)] has now been eliminated. Thus, the output units do not have a flat spot problem; on the other hand, fh' still appears in Equation (5.2.18) for the hidden units (this derivative appears implicitly in the corresponding term of the standard backprop equation). Therefore, the flat spot problem is only partially solved by employing the entropy criterion.

The entropy-based backprop is well suited to probabilistic training data. It has a natural interpretation in terms of learning the correct probabilities of a set of hypotheses represented by the outputs of units in a multilayer neural network. Here, the probability that the lth hypothesis is true given an input pattern xk is determined by the output of the lth unit. The entropy criterion is a "well formed" error function (Wittner and Denker, 1988); the reader is referred to Section 3.1.5 for a definition and discussion of "well formed" error functions. Such functions have been shown in simulations to converge faster than standard backprop (Solla et al., 1988).
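A minimal sketch of this criterion for bipolar (tanh) units follows; the scaling constants are an assumption chosen so that the output-layer error term reduces to (dl − yl), consistent with the elimination of the fo' factor noted above.

import numpy as np

def entropy_error(d, y, eps=1e-12):
    # d, y in (-1, 1); (1+d)/2 and (1+y)/2 are treated as target/output probabilities
    return 0.5 * np.sum((1 + d) * np.log((1 + d + eps) / (1 + y + eps))
                        + (1 - d) * np.log((1 - d + eps) / (1 - y + eps)))

def output_delta(d, y):
    # for y = tanh(net), dE/dnet simplifies to (y - d): the (1 - y**2) factor
    # cancels, so the output units have no flat spot
    return d - y

d = np.array([0.9, -0.9]); y = np.array([0.2, -0.5])
print(entropy_error(d, y), output_delta(d, y))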

Another choice is the Minkowski-r criterion function (Hanson and Burr, 1988):

(5.2.19)

which leads to the following weight update equations:

(5.2.20)

and

(5.2.21)
where sgn is the sign function. These equations reduce to those of standard backprop for the case r = 2. The motivation behind the use of this criterion is that it can lead to maximum likelihood estimation of weights for Gaussian and non-Gaussian input data distributions by appropriately choosing r (e.g., r = 1 for data with Laplace distributions). A small r (1 ≤ r < 2) gives less weight to large deviations and tends to reduce the influence of outlier points in the input space during learning. On the other hand, when noise is negligible, the sensitivity of the separating surfaces implemented by the hidden units to the geometry of the problem may be increased by employing r > 2. Here, fewer hidden units are recruited when learning complex nonlinearly separable mappings for larger r values (Hanson and Burr, 1988).

If no a priori knowledge is available about the distribution of the training data, it would be difficult to
estimate a value for r, unless extensive experimentation with various r values (e.g., r = 1.5, 2, 3) is done.
Alternatively, an automatic method for estimating r is possible by adaptively updating r in the direction of
decreasing E. Here, steepest gradient descent on E(r) results in the update rule

(5.2.22)

which, when r is restricted to be strictly greater than 1 (the metric error measure case), may be approximated as

(5.2.23)

Note that it is important that the r update rule be invoked much less frequently than the weight update rule
(for example, r is updated once every 10 training epochs of backprop).
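The following sketch illustrates the Minkowski-r criterion, the output error term that replaces (dl − yl) in standard backprop, and a gradient step on r itself; the numerical values, the r learning rate, and the lower bound used to keep r strictly greater than 1 are illustrative assumptions.

import numpy as np

def minkowski_error(d, y, r):
    return np.sum(np.abs(d - y) ** r) / r

def output_error_term(d, y, r, eps=1e-12):
    # replaces (d - y) of standard backprop; reduces to it when r = 2
    e = d - y
    return (np.abs(e) + eps) ** (r - 1) * np.sign(e)

def r_gradient(d, y, r, eps=1e-12):
    # dE/dr for E(r) = (1/r) * sum |e|^r ; used to adapt r toward decreasing E
    a = np.abs(d - y) + eps
    return np.sum(a ** r * (np.log(a) - 1.0 / r)) / r

d = np.array([1.0, -1.0, 0.5]); y = np.array([0.7, -0.2, 0.4]); r = 1.5
r_new = max(1.05, r - 0.01 * r_gradient(d, y, r))   # invoked far less often than the weight updates
print(minkowski_error(d, y, r), output_error_term(d, y, r), r_new)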

The idea of increasing the learning robustness of backprop in noisy environments can be placed in a more general statistical framework (White, 1989) where the techniques of robust statistics (Huber, 1981) come into play. Here, robustness of learning refers to insensitivity to small perturbations in the underlying probability distribution p(x) of the training set. These statistical techniques motivate the replacement of the linear error in Equations (5.1.2) and (5.1.9) by a nonlinear error suppressor function which is compatible with the underlying probability density function p(x). One example is to choose the suppressor of the Minkowski form with 1 ≤ r < 2; this error suppressor leads to exactly the Minkowski-r weight update rule of Equations (5.2.20) and (5.2.21). In fact, the case r = 1 is equivalent to minimizing the summed absolute error criterion, which is known to suppress outlier data points. Similarly, the suppressor suggested by Kosko (1992) leads to robust backprop if p(x) has long tails, such as a Cauchy distribution or some other infinite variance density.

Furthermore, regularization terms may be added to the above error functions E(w) in order to introduce some desirable effects such as good generalization, smaller effective network size, smaller weight magnitudes, faster learning, etc. (Poggio and Girosi, 1990a; Mao and Jain, 1993). The regularization terms in Equations (5.2.12) and (5.2.14), used for enhancing generalization through weight pruning/elimination, are examples. Another possible regularization term, proposed by Drucker and Le Cun (1992), has been shown to improve backprop generalization by forcing the output to be insensitive to small changes in the input. It also helps speed up convergence by generating hidden layer weight distributions that have smaller variances than those generated by standard backpropagation. (Refer to Problem 5.2.11 for yet another form of regularization.)

Weight sharing, a method where several weights in a network are controlled by a single parameter, is another way of enhancing generalization (Rumelhart et al., 1986b; also see Section 5.3.3 for an application). It imposes equality constraints among weights, thus reducing the number of free (effective) parameters in the network, which leads to improved generalization. An automatic method for effecting weight sharing can be derived by adding the regularization term (Nowlan and Hinton, 1992a and 1992b):

(5.2.24)

to the error function, where each pj(wi) is a Gaussian density with mean μj and variance σj2, πj is the mixing proportion of Gaussian pj (with the πj summing to 1), and wi represents an arbitrary weight in the network. The μj, σj, and πj parameters are assumed to adapt as the network learns. The use of multiple adaptive Gaussians allows
the implementation of "soft-weight sharing," in which the learning algorithm decides for itself which
weights should be tied together. If the Gaussians all start with high variance, the initial grouping of weights
into subsets will be very soft. As the network learns and the variance shrinks, those groupings become more
and more distinct and converge to subsets influenced by the task being learned.

For gradient descent-based adaptation, one may employ the partial derivatives

, (5.2.25)

, (5.2.26)

, (5.2.27)

and

(5.2.28)

with

(5.2.29)

It should be noted that the derivation of the partial of R with respect to the mixing proportions is less straightforward than those in Equations (5.2.25) through (5.2.27), since we must maintain the sum of the πj's equal to 1. Thus, the result in Equation (5.2.28) has been obtained by appropriate use of a Lagrange multiplier method and a bit of algebraic manipulation. The term rj(wi) in Equations (5.2.25) through (5.2.29) is the posterior probability of Gaussian j given weight wi; i.e., it measures the responsibility of Gaussian j for the ith weight. Equation (5.2.25) attempts to pull the weights towards the center of the "responsible" Gaussian. It realizes a competition mechanism among the various Gaussians for taking on responsibility for weight wi. The partial derivative for μj drives μj toward the weighted average of the set of weights for which Gaussian j is responsible. Similarly, one may come up with simple interpretations for the derivatives in Equations (5.2.27) and (5.2.28). To summarize, the penalty term in (5.2.24) leads to unsupervised clustering of weights (weight sharing) driven by the biases in the training set.
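A minimal sketch of the penalty term and the responsibilities rj(wi) follows; the number of Gaussians, their initial parameters, and the weight derivative shown are assumptions consistent with the description above (the corresponding updates for the means, variances, and mixing proportions would follow from Equations (5.2.25) through (5.2.28)).

import numpy as np

def gaussian(w, mu, var):
    return np.exp(-0.5 * (w - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def soft_sharing_penalty(w, pi, mu, var):
    # R = -sum_i log sum_j pi_j * N(w_i; mu_j, var_j), as in Equation (5.2.24)
    mix = np.array([np.sum(pi * gaussian(wi, mu, var)) for wi in w])
    return -np.sum(np.log(mix))

def responsibilities(w, pi, mu, var):
    # r_j(w_i): posterior probability of Gaussian j given weight w_i
    p = pi * gaussian(w[:, None], mu, var)          # shape (num_weights, num_gaussians)
    return p / p.sum(axis=1, keepdims=True)

def penalty_grad_w(w, pi, mu, var):
    # dR/dw_i = sum_j r_j(w_i) * (w_i - mu_j) / var_j : pulls each weight toward
    # the center of the "responsible" Gaussian(s)
    r = responsibilities(w, pi, mu, var)
    return np.sum(r * (w[:, None] - mu) / var, axis=1)

w = np.array([0.05, -0.02, 0.8, 0.75])              # some network weights
pi, mu, var = np.array([0.5, 0.5]), np.array([0.0, 0.8]), np.array([0.1, 0.1])
print(soft_sharing_penalty(w, pi, mu, var), penalty_grad_w(w, pi, mu, var))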

5.3 Applications

Backprop is by far the most popular supervised learning method for multilayer neural networks. Backprop and its variations have been applied to a wide variety of problems including pattern recognition, signal processing, image compression, speech recognition, medical diagnosis, prediction, nonlinear system modeling, and control.

The most appealing feature of backprop is its adaptive nature, which allows complex processes to be modeled through learning from measurements or examples. This method does not require knowledge of specific mathematical models for, or expert knowledge of, the problem being solved. The purpose of this section is to give the reader a flavor of the various areas of application of backprop and to illustrate some strategies that might be used to enhance the training process on some nontrivial real-world problems.

5.3.1 NETtalk

One of the earliest applications of backprop was to train a network to convert English text into speech (Sejnowski and Rosenberg, 1987). A related application is the Glove-Talk system (Fels and Hinton, 1993), which converts hand gestures into speech. Here, a bank of five feedforward neural networks with single hidden layers and backprop training is used to map sensor signals generated by a data glove to appropriate commands (words), which in turn are sent to a speech synthesizer which then speaks the word. A block diagram of the Glove-Talk system is shown in Figure 5.3.2. The hand-gesture data generated by the data glove consist of 16 parameters representing x, y, z, roll, pitch, and yaw of the hand relative to a fixed reference and ten finger flex angles. These parameters are measured every 1/60th second.

Figure 5.3.2. Block diagram of the Glove-Talk system.

Another important application of backprop is the recognition of handwritten ZIP code digits (Le Cun et al., 1989). In the network used there, "weight sharing" interconnections are used between the inputs and layer H1 and between layers H1 and H2. The network is represented in Figure 5.3.5.

Weight sharing refers to having several connections controlled by a single weight. Layer H2 consists of feature maps, each containing 4 by 4 units. The connection scheme between layers H1 and H2 is quite similar to the one described above between H1 and the input layer, but with slightly more complications due to the multiple 8 by 8 feature maps in H1. Each unit in H2 receives input from a subset (eight in this case) of the 12 maps in H1. Its receptive field is composed of a 5 by 5 window centered at identical positions within each of the selected subset maps in H1. Again, all units in a given map in H2 share their weights.

As a result of the above structure, the network has 1256 units, 64,660 connections, and 9,760 independent weights. All units use a hyperbolic tangent activation function. Before training, the weights were initialized with a uniform random distribution between −2.4 and +2.4 and further normalized by dividing each weight by the fan-in of the corresponding unit.

Backprop based on the approximate Newton method (described in Section 5.2.3) was employed in an incremental mode. The network was trained for 23 cycles through the training set (which required 3 days of CPU time on a Sun SparcStation 1). The percentage of misclassified patterns was 0.14% on the training set and 5.0% on the test set. Another performance test was performed employing a rejection criterion, where an input pattern was rejected if the levels of the two most active units in the output layer exceeded a given threshold. For this given rejection threshold, the network classification error on the test patterns was reduced to 1%, but resulted in a 12% rejection rate. Additional weight pruning based on information theoretic ideas in a four hidden layer architecture similar to the one described above resulted in a network with considerably fewer free parameters than that described above, and improved performance to 99% correct generalization with a rejection rate of only 9%.

In a related study of handwritten digit recognition (Martin and Pittman, 1991), digits were automatically presegmented and size normalized to a 15 by 24 gray scale array, with pixel values from 0 to 1.0. A total of
35,200 samples were available for training and another 4000 samples for testing. Here, various nets were trained using backprop to error rates of 2 - 3%. All nets had two hidden layers and ten units in their output layers, which employed 1-out-of-10 encoding. Three types of networks were trained: global fully interconnected nets with 150 units in the first hidden layer and 50 units in the second layer; local nets with 540 units in the first hidden layer receiving input from 5 by 8 local and overlapping regions (offset by 2 pixels) on the input array, with these hidden units fully interconnected to 100 units in the second hidden layer which, in turn, were fully interconnected to units in the output layer; and, finally, shared weight nets, which had approximately the same number of units in each layer as in the local nets. These shared weight nets employed a weight sharing strategy, similar to the one in Figure 5.3.5, between the input and the first hidden layer and between the first and second hidden layers. Full interconnectivity was assumed between the second hidden layer and the output layer.

With the full 35,200 sample training set, and with a rejection rate of 9.6%, the generalization error was 1.7%, 1.1%, and 1.7% for the global, local, and local shared weights nets, respectively. When the size of the training set was reduced to the 1000 to 4000 range, the local shared weights net (with about 6,500 independent weights) was substantially better than the global (at 63,000 independent weights) and local (at approximately 79,000 independent weights) nets. All these results suggest another way of achieving good generalization.

Another successful application of backprop is ALVINN, a neural network-based autonomous vehicle controller (Pomerleau, 1991). It is an example of a successful application using sensor data in real time to perform a real-world perception task. Using a real-time learning technique, ALVINN quickly learned to autonomously control the van by observing the reactions of a human driver.

ALVINN's architecture consists of a single hidden layer fully interconnected feedforward net with 5 sigmoidal units in the hidden layer and 30 linear output units. The input is a 30 by 32 pattern reduced from the image of an on-board camera. The steering direction generated by the network is taken to be the center of mass of the activity pattern generated by the output units. This allows finer steering corrections, as compared to using the most active output unit.

During the training phase, the network is presented with road images as inputs and the corresponding steering signal generated by the human driver as the desired output. Backprop training is used with a constant learning rate for each weight that is scaled by the fan-in of the unit to which the weight projects. A steadily increasing momentum coefficient is also used during training. The desired steering angle is presented to the network as a Gaussian distribution of activation centered around the steering direction that will keep the vehicle centered on the road. The desired activation pattern was generated as a Gaussian function of Dl, where dl represents the desired output for unit l and Dl is the lth unit's distance from the correct steering direction point along the output vector. The variance of 10 was determined empirically. The Gaussian target pattern makes the learning task easier than a "1-of-30" binary target pattern since slightly different road images require the network to respond with only slightly different output vectors.
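As an illustration, the target pattern can be sketched as below; the specific form dl = exp(−Dl2/10) is an assumption consistent with the stated variance of 10, not a formula quoted from the text.

import numpy as np

def gaussian_targets(correct_index, num_outputs=30, variance=10.0):
    D = np.arange(num_outputs) - correct_index   # distance from the correct steering unit
    return np.exp(-D ** 2 / variance)

print(np.round(gaussian_targets(14), 2))         # a smooth hill of activation peaking at unit 14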

Since the human driver tends to steer the vehicle down the center of the road, the network will not be presented with enough situations where it must recover from misalignment errors. A second problem is that, when training the network with only the current image of the road, one runs the risk of overlearning from repetitive inputs, thus causing the network to "forget" what it had learned from earlier training.

These two problems are handled by ALVINN as follows. First, each input image is laterally shifted to create 14 additional images in which the vehicle appears to be shifted by various amounts relative to the road center. These images are shown in Figure 5.3.6. A correct steering direction is then generated and used as the desired target for each of the shifted images. Second, in order to eliminate the problem of overtraining on repetitive images, each training cycle consisted of a pass through a buffer of 200 images which includes the current original image and its 14 shifted versions. After each training cycle, a new road image and its 14 shifted versions are used to replace 15 patterns from the current set of 200 road scenes. Ten of the fifteen patterns to be replaced are the ones with the lowest error. The other five patterns are chosen randomly.
Figure 5.3.6. Shifted video images from a single original video image used to enrich the training set used to train ALVINN. (From D. A. Pomerleau, 1991, with permission of the MIT Press.)

ALVINN requires approximately 50 iterations through this dynamically evolving set of 200 patterns to learn to drive on the roads on which it was trained (an on-board Sun-4 Workstation took 5 minutes to do the training, during which a teacher driver drives at about 4 miles per hour over the test road). In addition to being able to drive along the same stretch of road it trained on, ALVINN can also generalize to drive along parts of the road it has never encountered, even under a wide variety of weather conditions. In the retrieval phase (autonomous driving), the system is able to process 25 images per second, allowing it to drive up to the van's maximum speed of 20 miles per hour (this maximum speed is due to constraints imposed by the hydraulic drive system). This speed is over twice the speed of any other sensor-based autonomous system that was able to drive this van. Further refinements of ALVINN have also been reported.

Backprop networks have also been applied to medical diagnosis (Bounds et al., 1988; Yoon et al., 1989; Baxt, 1990). In this section, an illustrative example of a neural network-based medical diagnosis system is described which is applied to the diagnosis of coronary occlusion (Baxt, 1990).

Acute myocardial infarction (coronary occlusion) is an example of a disease which is difficult to diagnose, and there have been a number of attempts to automate the diagnosis process using neural networks trained by backprop.

Another application area of backprop is image data compression, where conventional techniques are well established (see Gonzalez and Wintz, 1987). In the following, a neural network-based solution to this problem is described.

Consider the architecture of a single hidden layer feedforward neural network shown in Figure 5.3.7. This network has the same number of units in its output layer as inputs, and the number of hidden units is assumed to be much smaller than the dimension of the input vector. The hidden units are assumed to be of the bipolar sigmoid type, and the output units are linear. This network is trained on a set of n-dimensional real-valued vectors (patterns) xk such that each xk is mapped to itself at the output layer in an autoassociative mode. Thus, the network is trained to act as an encoder of real-valued patterns. Backprop may be used to learn such a mapping. In one simulation, an autoassociative net was trained on random 8 by 8 patches of the training image in Figure 5.3.8a using incremental backprop learning. Here, all pixel values are normalized in the range [−1, +1]. Typically, the learning consisted of 50,000 to 100,000 iterations at a learning rate of 0.01 and 0.1 for the hidden and output layer weights, respectively. Figure 5.3.8b shows the image reproduced by the autoassociative net when tested on the training image. The reproduced image is quite close (to the eye) to the training image in Figure 5.3.8a; hence the reconstructed image is of good quality.

In order to achieve true compression for the purpose of efficient transmission over a digital communication link, the outputs of the hidden units must be quantized. Quantization consists of transforming the outputs of the hidden units [which are in the open interval (−1, +1)] to some integer range corresponding to the number of bits required for transmission. This, effectively, restricts the information in the hidden unit outputs to the number of bits used. In general, this transformation should be designed with care.

By forcing the input patterns through the narrow hidden layer (bottleneck), backprop attempts to extract regularities (significant features) from the input vectors. Here, the hidden layer, which is also known as the representation layer, is expected to evolve an internal low-dimensional representation of the input data (Cottrell and Munro, 1988). In this net, the nonlinearity in the hidden units is theoretically of no help (Bourlard and Kamp, 1988), and indeed Cottrell et al. (1987) and Cottrell and Munro (1988) found that the nonlinearity has little added advantage in their simulations. These results are further supported by Baldi and Hornik (1989), who showed that if J linear hidden units are used, the network learns to project the input onto the subspace spanned by the first J principal components of the input. Thus, the network's hidden units discard as little information as possible by evolving their respective weight vectors to point in the direction of the input's principal components. This means that autoassociative backprop learning in a two layer feedforward neural network with linear units has no processing capability beyond those of the unsupervised Hebbian PCA nets of Section 3.3.5. Autoassociative nets with additional nonlinear hidden layers (Kramer, 1991), on the other hand, can compute "principal manifolds." These principal manifolds can, in some cases, serve as low-dimensional representations of the data which are more useful than principal components. A three hidden layer autoassociative net can, theoretically, compute any continuous mapping from the inputs to the second hidden layer (representation layer), and another mapping from the second hidden layer to the output layer. Thus, a three hidden layer autoassociative net (with a linear or nonlinear representation layer) may in principle be considered as a universal nonlinear PCA net. However, such a highly nonlinear net may be problematic to train by backprop due to local minima.
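The Baldi and Hornik result can be illustrated with a small numerical sketch: a purely linear autoassociative net trained by gradient descent ends up spanning (approximately) the subspace of the first J principal components. The data, learning schedule, and subspace comparison below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, J, num = 8, 2, 500
X = rng.normal(0, 1, (num, n)) @ rng.normal(0, 1, (n, n))   # correlated zero-mean data
X -= X.mean(axis=0)

W1 = rng.normal(0, 0.3, (J, n))     # encoder (representation layer) weights
W2 = rng.normal(0, 0.3, (n, J))     # decoder weights
lr = 0.001
for _ in range(5000):
    H = X @ W1.T                    # linear hidden (representation) layer
    R = H @ W2.T                    # linear reconstruction of the input
    err = R - X
    W2 -= lr * err.T @ H / num
    W1 -= lr * (err @ W2).T @ X / num

# compare the decoder's column space with the span of the first J principal components
V = np.linalg.svd(X, full_matrices=False)[2][:J].T          # principal directions
Q = np.linalg.qr(W2)[0]
cosines = np.linalg.svd(V.T @ Q, compute_uv=False)
print("cosines of principal angles:", np.round(cosines, 3)) # values near 1 indicate matching subspaces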

Another way of interpreting the above autoassociative feedforward network is from the point of view of feature extraction (Kuczewski et al., 1987; Cottrell, 1991). In one such application, the encoder subnet of a four hidden layer autoassociative net is used to supply five-dimensional inputs to a feedforward neural classifier. The classifier was trained to recognize the gender of a limited set of subjects. Here, the autoassociative net was first trained using backprop, with pruning of representation layer units, to generate a five-dimensional representation from 50-dimensional inputs. The inputs were taken as the first 50 principal components of 64 × 64-pixel, 8-bit gray scale images, each of which can be considered to be a point in a 4,096-dimensional "pixel space." Here, the training set is comprised of 160 images of various facial impressions of 10 male and 10 female subjects, of which 120 images were used for training and 40 for testing. The images are captured by a frame grabber and reduced to 64 × 64 pixels by averaging. Each image is then aligned along the axes of the eyes and mouth. All images are normalized to have equal brightness and variance, in order to prevent the use of first order statistics for discrimination. Finally, the grey levels of image pixels are linearly scaled to the range [0, 0.8]. The overall encoder/classifier system resulted in 95% correct gender recognition on both training and test sets, which was found to be comparable to the recognition rate of human beings on the same images.

The high rate of correct classification in the above simulation is a clear indication of the "richness" and significance of the representations/feature vectors discovered by the nonlinear PCA autoassociative net. For another significant application of nonlinear PCA autoassociative nets, the reader is referred to Usui et al. (1991). A somewhat related recurrent multilayer autoassociative net for data clustering and signal decomposition is presented in Section 6.4.2.

5.4 Extensions of Backprop for Temporal Learning

Up to this point we have been concerned with "static" mapping networks which are trained to produce a
spatial output pattern in response to a particular spatial input pattern. However, in many engineering,
scientific, and economic applications, the need arises to model dynamical processes where a time sequence
is required in response to certain temporal input signal(s). One such example is plant modeling in control
applications. Here, it is desired to capture the dynamics of an unknown plant (usually nonlinear) by
modeling a flexible-structured network that will imitate the plant by adaptively changing its parameters to
track the plant's observable output signals when driven by the same input signals. The resulting model is
referred to as a temporal association network.

Temporal association networks must have a recurrent (as opposed to static) architecture in order to handle
the time dependent nature of associations. Thus, it would be very useful to extend the multilayer
feedforward network and its associated training algorithm(s) (e.g., backprop) into the temporal domain. In
general, this requires a recurrent architecture (nets with feedback connections) and proper associated
learning algorithms.

Two special cases of temporal association networks are sequence reproduction and sequence recognition
networks. For sequence reproduction, a network must be able to generate the rest of a sequence from a part
of that sequence. This is appropriate, for example, for predicting the price trend of a given stock market
from its past history or predicting the future course of a time series from examples. In sequence recognition,
a network produces a spatial pattern or a fixed output in response to a specific input sequence. This is
appropriate, for example, for speech recognition, where the output encodes the word corresponding to the
speech signal. NETtalk and Glove-Talk of Section 5.3 are two other examples of sequence recognition
networks.

In the following, neural net architectures having various degrees of recurrency and their associated learning
methods are introduced which are capable of processing time sequences.

5.4.1 Time-Delay Neural Networks

Consider the time-delay neural network architecture shown in Figure 5.4.1. This network maps a finite time sequence x(t), x(t − 1), ..., x(t − m) into a single output y (this can also be generalized for the case when x and/or y are vectors). One may view this neural network as a discrete-time nonlinear filter (we may also use the borrowed terms finite-duration impulse response (FIR) filter or nonrecursive filter from the linear filtering literature).

The architecture in Figure 5.4.1 is equivalent to a single hidden layer feedforward neural network receiving
the (m + 1)-dimensional "spatial" pattern x generated by a tapped delay line preprocessor from a temporal
sequence. Thus, if target values for the output unit are specified for various times t, backprop may be used
to train the above network to act as a sequence recognizer.

Figure 5.4.1. A time-delay neural network for one-dimensional input/output signals.

The time-delay neural net has been successfully applied to the problem of speech recognition (e.g., Tank and Hopfield, 1987; Elman and Zipser, 1988; Waibel, 1989; Waibel et al., 1989; Lippmann, 1989) and time series prediction (Lapedes and Farber, 1988; Weigend et al., 1991). Here, we discuss time series prediction since it captures the spirit of the type of processing done by the time-delay neural net. Given observed values of the state x of a (nonlinear) dynamical system at discrete times less than t, the goal is to use these values to accurately predict x(t + p), where p is some prediction time step into the future (for simplicity, we assume a one-dimensional state x). Clearly, as p increases the quality of the predicted value will degrade for any predictive method. A method is robust if it can maintain prediction accuracy for a wide range of p values.

As is normally done in linear signal processing applications (e.g., Widrow and Stearns, 1985), one may use the tapped delay line nonlinear filter of Figure 5.4.1 as the basis for predicting x(t + p). Here, a training set is constructed of pairs {xk, dk}, where xk is the vector of m + 1 consecutive samples [x(k), x(k − 1), ..., x(k − m)]T and dk = x(k + p). Backprop may now be employed to learn such a training set. Reported simulation results of this prediction method show comparable or better performance compared to other non-neural network-based techniques (Lapedes and Farber, 1988; Weigend et al., 1991; Weigend and Gershenfeld, 1993).
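The construction of the prediction training set can be sketched as follows; the example series, window length m, and prediction step p are arbitrary assumptions.

import numpy as np

def make_prediction_pairs(series, m, p):
    X, D = [], []
    for k in range(m, len(series) - p):
        X.append(series[k - m:k + 1][::-1])   # [x(k), x(k-1), ..., x(k-m)]
        D.append(series[k + p])               # target: x(k + p)
    return np.array(X), np.array(D)

t = np.arange(200)
series = np.sin(0.07 * t) + 0.3 * np.sin(0.23 * t)
X, D = make_prediction_pairs(series, m=5, p=3)
print(X.shape, D.shape)    # (192, 6) (192,) -> ready for backprop training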

Theoretical justification for the above approach is available in the form of a very powerful theorem by Takens (1981), which states that, for m sufficiently large (relative to d), there exists a functional relation of the form

(5.4.1)

as long as the trajectory x(t) evolves towards compact attracting manifolds of dimension d. This theorem, however, provides no information on the form of g or on how large m must be. The time-
delay neural network approach provides a robust approximation for g in Equation (5.4.1) in the form of the
continuous, adaptive parameter model

(5.4.2)

where a linear activation is assumed for the output unit, and fh is the nonlinear activation of hidden units.

A simple modification to the time-delay net makes it suitable for sequence reproduction. The training procedure is identical to the one for the above prediction network. However, during retrieval, the output y [predicting the next sample of the sequence] is propagated through a single delay element, with the output of this delay element connected to the input of the time-delay net as shown in Figure 5.4.2. This sequence reproduction net will only work if the prediction is very accurate, since any error in the predicted signal has a multiplicative effect due to the iterated scheme employed.
Figure 5.4.2. Sequence reproduction network.
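The retrieval-phase feedback loop of Figure 5.4.2 can be sketched as below, with the trained time-delay net replaced by a placeholder prediction function.

import numpy as np

def predict_next(window):
    # stand-in for the trained time-delay net mapping [x(t), ..., x(t-m)] to x(t+1)
    return 0.9 * window[0] - 0.2 * window[1]

def reproduce(seed_window, steps):
    window = list(seed_window)               # most recent value first
    out = []
    for _ in range(steps):
        y = predict_next(window)             # predicted next sample
        out.append(y)
        window = [y] + window[:-1]           # feed the prediction back through the delay line
    return np.array(out)

print(reproduce([0.5, 0.4, 0.3], steps=5))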

Further generalization of the above ideas can result in a network for temporal association. We present such
modifications in the context of nonlinear dynamical plant identification/modeling of control theory.
Consider the following general nonlinear single input, single output plant described by the difference
equation:

(5.4.3)

where u(t) and x(t) are, respectively, the input and output signals of the plant at time t, and g is a nonlinear function. We are interested in training a suitable layered neural network to capture the dynamics of the plant in Equation (5.4.3), thus modeling the plant. Here, we assume that the order of the plant is known (m and n are known). The general form of Equation (5.4.3) suggests the use of the time-delay neural network shown inside the dashed rectangle in Figure 5.4.3. This may also be viewed as a (nonlinear) recursive filter, termed an infinite-duration impulse response (IIR) filter in the linear filtering literature.

Figure 5.4.3. A time-delay neural network setup for the identification of a nonlinear plant.

During training, the neural network and the plant receive the same input u(t). The neural network also receives the plant's output x(t+1) (switch S in the up position in Figure 5.4.3). Backprop can be used to update the weights of the neural network based on the "static" mapping pairs formed from the delayed plant inputs and outputs, with x(t+1) as the target, for various values of t. This identification scheme is referred to as the series-parallel identification model (Narendra and Parthasarathy, 1990). After training, the neural network with the switch S in the down position (the network's own output is fed back as the input to the top delay line in Figure 5.4.3) will generate (recursively) an output time sequence in response to an input time sequence. If the training was successful, one would expect this output to approximate the actual output of the plant, x(t), for the same input signal u(t) and same initial conditions. Theoretical justifications for the effectiveness of this neural network identification method can be found in Levin and Narendra (1992).
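The series-parallel training data construction can be sketched as follows; the stand-in plant, its orders n and m, and the input ordering are assumptions for illustration only.

import numpy as np

def plant(x_hist, u_hist):
    # arbitrary stand-in for the unknown nonlinear plant g(.)
    return 0.3 * x_hist[0] + 0.6 * x_hist[1] + np.tanh(u_hist[0])

def make_identification_pairs(u, n=2, m=1):
    x = np.zeros(len(u))
    X, D = [], []
    for t in range(max(n, m), len(u) - 1):
        x[t + 1] = plant(x[t::-1], u[t::-1])          # measured plant output
        # input: [u(t),...,u(t-m), x(t),...,x(t-n+1)]; target: measured x(t+1)
        X.append(np.concatenate([u[t - m:t + 1][::-1], x[t - n + 1:t + 1][::-1]]))
        D.append(x[t + 1])
    return np.array(X), np.array(D)

u = np.random.default_rng(1).uniform(-1, 1, 300)      # random training input signal
X, D = make_identification_pairs(u)
print(X.shape, D.shape)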

Narendra and Parthasarathy (1990) reported successful identification of nonlinear plants by time-delay
neural networks similar to the one in Figure 5.4.3. In one of their simulations, the feedforward part of the
neural network consisted of a two hidden layer network with five inputs and a single linear output unit. The
two hidden layers consisted of 20 and 10 units with bipolar sigmoid activations, respectively. This network
was used to identify the unknown plant
(5.4.4)

The inputs to the neural network during training were the appropriately delayed samples of the plant output and input.


Incremental backprop was used to train the network using a uniformly distributed random input signal whose amplitude was in the interval [−1, +1]. The training phase consisted of 100,000 training iterations, which amounts to one training cycle over the random input signal u(t), 0 < t ≤ 100,000. A learning rate of
0.25 was used. Figure 5.4.4 (a) shows the output of the plant (solid line) and the model (dotted line) for the
input signal

(5.4.5)

It should be noted that in the above simulation no attempt has been made to optimize the network size or to tune the learning process. For example, Figure 5.4.4(b) shows simulation results with a single hidden layer net consisting of twenty bipolar sigmoid activation hidden units. Here, incremental backprop with a learning rate of 0.25 was used. The training phase consisted of 5,000,000 iterations. This amounts to 10,000 training cycles over a 500 sample input signal having the same characteristics as described above.

Other learning algorithms may be used for training the time-delay neural network discussed above, some of
which are extensions of algorithms used in classical linear adaptive filtering or adaptive control. Nerrand et
al. (1993) present examples of such algorithms.

(a) (b)

Figure 5.4.4. Identification results for the plant in Equation (5.4.4) using a time-delay neural network. Plant

output x(t) (solid line) and neural network output (dotted line) in response to the input signal in
Equation (5.4.5). (a) The network has two hidden layers and is trained with incremental backprop for one
cycle over a 100,000 sample random input signal. (Adapted from K. S. Narendra and K. Parthasarathy,
1990, Identification and Control of Dynamical Systems Containing Neural Networks, IEEE Trans. on
Neural Networks, 1(1), 4-27, ©1990 IEEE.) (b) The network has a single hidden layer and is trained for
10,000 cycles over a 500 sample random input signal.


5.4.2 Backpropagation Through Time


In the previous section, a partially recurrent neural network was presented which is capable of temporal association. In general, however, a fully recurrent neural net is a more appropriate and economical alternative.
Here, individual units may be input units, output units, or both. The desired targets are defined on a set of
arbitrary units at certain predetermined times. Also, arbitrary interconnection patterns between units can
exist. An example of a simple two-unit fully interconnected network is shown in Figure 5.4.5(a). The
network receives an input sequence x(t) at unit 1, and it is desired that the network generates the sequence
d(t) as the output y2(t) of unit 2.

A network which behaves identically to the above simple recurrent net over the time steps t = 1, 2, 3, and 4
is shown in Figure 5.4.5(b). This amounts to unfolding the recurrent network in time (Minsky and Papert,
1969) to arrive at a feedforward layered network. The number of resulting layers is equal to the unfolding
time interval T. This idea is effective when T is small and limits the maximum length of sequences that can
be generated. Here, all units in the recurrent network are duplicated T times, so that a separate unit in the
unfolded network holds the state yi(t) of the equivalent recurrent network at time t. Note that the
connections wij from unit j to unit i in the unfolded network are identical for all layers.

(a) (b)

Figure 5.4.5. (a) A simple recurrent network. (b) A feedforward network generated by unfolding in time the
recurrent net in (a). The two networks are equivalent over the four time steps t = 1, 2, 3, 4.

The resulting unfolded network simplifies the training process of encoding the x(t) → d(t) sequence association since now backprop learning is applicable. However, we should note a couple of things here. First, targets may be specified for hidden units. Thus, errors at the output of hidden units, and not just the output errors, must be propagated backward from the layer in which they originate. Second, it is important to realize the constraint that all copies of each weight wij must remain identical across duplicated layers (backprop normally produces a different increment for each particular weight copy). A simple solution is to add together the individual weight changes for all copies of a particular weight wij and then change all such copies by the total amount. Once trained, a copy of the weights from any layer of the unfolded net is copied into
training unfolded recurrent neural nets results in the so-called backpropagation through time learning
method (Rumelhart et al., 1986b). There exist relatively few applications of this technique in the literature
(e.g., Rumelhart et al., 1986b; Nowlan, 1988; Nguyen and Widrow, 1989). One reason is its inefficiency in
handling long sequences. Another reason is that other learning methods are able to solve the problem
without the need for unfolding. These methods are treated next. But first, we describe one interesting
application of backpropagation through time: The truck backer-upper problem.
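Before turning to that application, the weight-copy constraint can be made concrete with a small sketch: for a single-weight-matrix recurrent net unfolded over T steps, the gradient contribution of every copy of W is accumulated and the shared W is changed once by the total amount. The network size, data, and learning rate below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, T, lr = 2, 4, 0.1
W = rng.normal(0, 0.5, (N, N))
x = rng.normal(0, 1, (T + 1, N))       # x[1..T] are the inputs
d = rng.normal(0, 1, (T + 1, N))       # d[1..T] are the desired outputs

for epoch in range(200):
    y = np.zeros((T + 1, N))
    for t in range(1, T + 1):          # forward pass through the unfolded net
        y[t] = np.tanh(W @ y[t - 1] + x[t])
    dW = np.zeros_like(W)
    delta_next = np.zeros(N)
    for t in range(T, 0, -1):          # backward pass through the duplicated layers
        dE_dy = (y[t] - d[t]) + W.T @ delta_next
        delta = dE_dy * (1 - y[t] ** 2)
        dW += np.outer(delta, y[t - 1])   # accumulate the change for every copy of W
        delta_next = delta
    W -= lr * dW                       # apply the summed change to the shared weights

print("final error:", 0.5 * np.sum((y[1:] - d[1:]) ** 2))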
Consider the trailer truck system shown in Figure 5.4.6. The goal is to design a controller which successfully backs up the truck so that the back of the trailer, designated by coordinates (x, y), ends at (0, 0) with the trailer perpendicular to the dock (i.e., the trailer angle θt is zero), and where only backward movements of the cab are allowed. The controller receives the observed state x = [x, y, θt, θc]T (θc is the cab angle) and produces a steering signal (angle) θs. It is assumed that the truck backs up at a constant speed.
The details of the trailer truck kinematics can be found in Miller et al. (1990a). The original application
assumes six state variables, including the position of the back of the cab. However, these two additional
variables may be eliminated if the length of the cab and that of the trailer are given.

Figure 5.4.6. A pictorial representation of the truck backer-upper problem. The objective here is to design a controller which generates a steering signal θs which successfully backs up the truck so that the back of the trailer ends at the (0, 0) reference point, with the trailer perpendicular to the dock.

Before the controller is designed, a feedforward single hidden layer neural network is trained, using
backprop, to emulate the truck and trailer kinematics. This is accomplished by training the network on a
large number of backup trajectories (corresponding to random initial trailer truck position configurations),
each consisting of a set of association pairs {[x(k−1)T θs(k−1)]T, x(k)}, where k = 1, 2, ..., T, and T represents the number of backup steps until the trailer hits the dock or leaves some predesignated borders of the parking lot (T depends on the initial state of the truck and the applied steering signal θs). The steering signal was
selected randomly during this training process. The general idea for training the emulator is depicted in the
block diagram of Figure 5.4.3 for the identification of nonlinear dynamical systems by a neural network.
However, the tapped delay lines are not needed here because of the kinematic nature of the trailer truck
system. Next, the trained emulator network is used to train the controller. Once trained, the controller is
used to control the real system. The reason for training the controller with the emulator and not with the real
system is justified below.

Figure 5.4.7 shows the controller/emulator system in a retrieval mode. The whole system is recurrent due to
the external feedback loops (actually, the system exhibits partial recurrence since the emulator is a
feedforward network and since it will be assumed that the controller has a feedforward single hidden layer
architecture). The controller and the emulator are labeled C and E, respectively, in the figure. The controller
receives the input vector x(k) and responds with a single output θs(k), representing the control signal.
Figure 5.4.7. Controller/emulator retrieval system.

The idea of unfolding in time is applicable here. When initialized with state x(0), the system with the
untrained, randomly initialized controller neural network evolves over T time steps until its state enters a
restricted region (i.e., the trailer hits the borders). Unfolding the controller/emulator neural network T time
steps results in the T-level feedforward network of Figure 5.4.8. This unfolded network has a total of 4T - 1
layers of hidden units. The backpropagation through time technique can now be applied to adapt the
controller weights. The only units with specified desired targets are the three units of the output layer at level T representing x, y, and θt. The desired target vector is the zero vector.

Figure 5.4.8. Unfolded trailer truck controller/emulator network over T time steps.

Once the output layer errors are computed, they are propagated back through the emulator network units
and through the controller network units. Here, only the controller weights are adjusted (with equal
increments for all copies of the same weight as discussed earlier). The need to propagate the error through
the plant block necessitates that a neural network-based plant emulator be used to replace the plant during
training. The trained controller is capable of backing the truck from any initial state, as long as it has
sufficient clearance from the loading dock. Thousands of backups are required to train the controller. It is
helpful (but not necessary) to start the learning with "easy" initial cases and then proceed to train with more
difficult cases. Typical backup trajectories are shown in Figure 5.4.9.

(a)

(b)

(c)
Figure 5.4.9. Typical backup trajectories for the trailer truck which resulted by employing a
backpropagation through time trained controller (a) Initial state, (b) trajectory, and (c) final state. (Courtesy
of Lee Feldkamp and Gint Puskorius of Ford Research Laboratory, Dearborn, Michigan.)

5.4.3 Recurrent Backpropagation

This section presents an extension of backprop to fully recurrent networks where the units are assumed to
have continuously evolving states. The new algorithm is used to encode " spatial" input/output associations
as stable equilibria of the recurrent network; i.e., after training on a set of {xk, dk} pattern pairs, the
presentation of xk is supposed to drive the network's output y(t) towards the fixed attractor state dk. Thus,
the extension here is still restricted to learning " static" mappings as opposed to temporal association;
however, it will serve as the basis for other extensions of backprop to sequence association which are
discussed later in this section. The present extension, usually called recurrent backpropagation, was
proposed independently by Pineda (1987, 1988) and Almeida (1987, 1988).

Consider a recurrent network of N units with outputs yi, connections wij, and activations f(neti). A simple example (N = 2) of such a network is shown in Figure 5.4.5(a). A unit is an input unit if it receives an element of the input pattern xk. By definition, non-input units are assigned an input of 0. Output units are designated as units with prespecified desired outputs. In general, a unit may belong to the set of input units and the set of output units simultaneously, or it may be "hidden" in the sense that it is neither an input nor an output unit. Henceforth, the pattern index k is dropped for convenience.

A biologically as well as electronically motivated choice for the state evolution of unit i is given by (refer to
Equations (4.7.8) and (7.1.19), respectively).

(5.4.6)

where neti represents the total input activity of unit i and −yi simulates natural signal decay. By setting dyi/dt = 0, one arrives at the equilibrium points y* of the above system, given by:

(5.4.7)

The following is a derivation of a learning rule for the system/network in Equation (5.4.6), which assumes the existence and asymptotic stability of at least one equilibrium point y* in Equation (5.4.7). Such equilibrium points represent the steady-state response of the network. Suppose that the network has converged to an equilibrium state y* in response to an input x. Then, if neuron i is an output neuron, it will respond with yi*. This output is compared to the desired response di, resulting in an error signal Ei. The goal is to adjust the weights of the network in such a way that the state y* ultimately becomes equal to the desired response d associated with the input x. In other words, our goal is to minimize the error function

(5.4.8)
with Ei = 0 if unit i is not an output unit. Note that an instantaneous error function is used so that the
resulting weight update rule is incremental in nature. Using gradient descent search to update the weight
wpq gives

(5.4.9)

with the partial derivatives of yi* with respect to wpq given by differentiating Equation (5.4.7) to obtain

(5.4.10)

where δip is the Kronecker delta (δip = 1 if i = p and zero otherwise). Another way of writing Equation (5.4.10) is

(5.4.11)

where

(5.4.12)

Now, one may solve for the derivatives of y* with respect to wpq by inverting the set of linear equations represented by Equation (5.4.11) and get

(5.4.13)

where (L−1)ip is the ipth element of the inverse matrix L−1. Hence, substituting Equation (5.4.13) in Equation
(5.4.9) gives the desired learning rule:

(5.4.14)

When the recurrent network is fully connected, the matrix L is N × N and its inversion requires O(N3) operations using standard matrix inversion methods. Pineda and Almeida independently showed that a more economical local implementation, utilizing a modified recurrent neural network of the same size as the original network, is possible. This implementation has O(N2) computational complexity and is usually called recurrent backpropagation. To see this, consider the summation term in Equation (5.4.14) and define it as zp:

(5.4.15)

Then, undoing the matrix inversion in Equation (5.4.15) leads to the set of linear equations for the zp, as shown by

(5.4.16)

or, substituting for L from Equation (5.4.12), renaming the index p as j, and rearranging terms,

(5.4.17)

This equation can be solved using an analog network of units zi with the dynamics

(5.4.18)

Note that Equation (5.4.17) is satisfied by the equilibria of Equation (5.4.18). Thus, a solution for z* is obtained if z* is an attractor of the dynamics in Equation (5.4.18). It can be shown (see Problem 5.4.5) that z* is an attractor of Equation (5.4.18) if y* is an attractor of Equation (5.4.6).

The similarity between Equations (5.4.18) and (5.4.6) suggests that a recurrent network realization for computing z* should be possible. In fact, such a network may be arrived at by starting with the original network and replacing each coupling weight wij from unit j to unit i by a connection from unit i to unit j (scaled by the corresponding activation derivative), assuming linear activations for all units, setting all inputs to zero, and feeding the error Ei as input to the
ith output unit (of the original network). The resulting network is called the error-propagation network or
the adjoint of the original net. Figure 5.4.10 shows the error-propagation network for the simple recurrent
net given in Figure 5.4.5(a).
Figure 5.4.10. Error-propagation (adjoint) network for the simple recurrent net in Figure 5.4.5(a).

We may now give a brief outline of the recurrent backpropagation learning procedure. An input pattern xk is presented to the recurrent net and a steady-state solution y* is computed by iteratively solving Equation (5.4.6). The steady-state outputs of the net are compared with the target dk to find the output errors Ei. Then, the zi values are computed by iteratively solving Equation (5.4.18). The weights are finally adjusted using Equation (5.4.14) or its equivalent form

(5.4.19)

Next, a new input pattern is presented to the network and the above procedure is repeated, and so on. It should be noted that recurrent backpropagation reduces to incremental backprop for the special case of a net with no feedback.
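A minimal sketch of one such training step follows. The relaxation dynamics mirror the forms of Equations (5.4.6) and (5.4.18) and the final update follows the form of Equation (5.4.19); the integration step sizes, iteration counts, learning rate, and network size are assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 3
W = rng.normal(0, 0.2, (N, N))
x = np.array([0.5, -0.3, 0.0])          # input pattern (0 for non-input units)
d = np.array([0.0, 0.0, 0.8])           # desired output (only unit 3 is an output unit)
is_output = np.array([0.0, 0.0, 1.0])
f = np.tanh
fprime = lambda net: 1 - np.tanh(net) ** 2

for step in range(100):                 # repeated presentations of this single pattern
    y = np.zeros(N)
    for _ in range(200):                # relax the network dynamics to the equilibrium y*
        y += 0.1 * (-y + f(W @ y + x))
    net = W @ y + x
    E = is_output * (d - y)             # output errors (zero for non-output units)
    z = np.zeros(N)
    for _ in range(200):                # relax the adjoint (error-propagation) network to z*
        z += 0.1 * (-z + W.T @ (fprime(net) * z) + E)
    W += 0.2 * np.outer(fprime(net) * z, y)   # weight change proportional to f'(net_p) z_p y_q

print("output unit response:", y[2], "target:", d[2])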

The above analysis assumed that finite equilibria y* exist and are stable. However, it has been shown
(Simard et al., 1988, 1989) that for any recurrent neural network architecture, there always exist divergent trajectories for Equation (5.4.6). In practice, though, if the initial weights are chosen to be small enough, the
network almost always converges to a finite stable equilibrium y*.

One potential application of recurrent backpropagation networks is as associative memories (for a definition and details of associative memories, refer to Chapter 7). This is because these networks build attractors dk which correspond to the input/output association patterns {xk, dk}. That is, if a noisy and/or incomplete version of a trained pattern xk is presented as input, it potentially causes the network to eventually converge
to dk. These pattern completion/error correction features are superior to those of feedforward networks
(Almeida, 1987). Other applications of recurrent backpropagation nets can be found in Qian and Sejnowski
(1989) and Barhen et al. (1989).

5.4.4 Time-Dependent Recurrent Backpropagation


The recurrent backpropagation method just discussed can be extended to recurrent networks that produce
time-dependent trajectories. One such extension is the time-dependent recurrent backpropagation method of
Pearlmutter (1989a, b) [see also Werbos (1988) and Sato (1990)]. In Pearlmutter's method, learning is
performed as a gradient descent in the weights of a continuous recurrent network to minimize an error
function E of the temporal trajectory of the states. It can be thought of as an extension of recurrent
backpropagation to dynamic sequences. The following is a brief outline of this algorithm.

Here, we start with a recurrent net with units yi having the dynamics

(5.4.20)

Note that the inputs xi(t) are continuous functions of time. Similarly, each output unit yl has a desired target
signal dl(t) that is also a continuous function of time.

Consider minimizing a criterion E(y), which is some function of the trajectory y(t) for t between 0 and t1.
Since the objective here is to teach the lth output unit to produce the trajectory dl(t) upon the presentation of
x(t), an appropriate criterion (error) functional is

(5.4.21)

which measures the deviation of yl from the function dl. Now, the partial derivatives of E with respect to the
weights may be computed as:

(5.4.22)

where yj(t) is the solution to Equation (5.4.20), and zi(t) is the solution of


the dynamical system given by

(5.4.23)

with the boundary condition zi(t1) = 0. Here, Ei(t) is given by di(t) − yi(t) if unit i is an output unit, and zero
otherwise. One may also simultaneously minimize E in the time-constant space by gradient descent,
utilizing
(5.4.24)

Equations (5.4.22) and (5.4.24) may be derived by using a finite difference approximation as in Pearlmutter
(1988). They may also be obtained using the calculus of variations and Lagrange multipliers as in optimal
control theory (Bryson and Denham, 1962).

Using numerical integration (e.g., first order finite difference approximations), one first solves Equation (5.4.20) for t in [0, t1], then sets the boundary condition zi(t1) = 0 and integrates the system in Equation (5.4.23) backward from t1 to 0. Having determined yi(t) and zi(t), we may proceed with computing the gradients of E with respect to the weights and time constants from Equations (5.4.22) and (5.4.24), respectively. Next, the weight and time-constant increments are computed by gradient descent using these gradients.

Due to its memory requirements and continuous-time nature, time-dependent recurrent backpropagation is
more appropriate as an off-line training method. Some applications of this technique include learning limit
cycles in two-dimensional space (Pearlmutter, 1989a) like the one shown in Figure 5.4.11 and 5.4.12. The
trajectory in Figure 5.4.11(b) and (c) are produced by a network of four hidden units, two output units, and
no input units, after 1,500 and 12,000 learning cycles, respectively. The desired trajectory [d1(t) versus
d2(t)] is the circle in Figure 5.4.11(a). The state space trajectories in Figure 5.4.12 (b) and (c) are generated
by a network with 10 hidden units and two output units, after 3,182 and 20,000 cycles, respectively. The
desired trajectory is shown in Figure 5.4.12(a). This method has also been shown to work well in time
series prediction (Logar et al., 1993). Fang and Sejnowski (1990) reported improved learning speed and
convergence of the above algorithm as the result of allowing independent learning rates for individual
weights in the network (for example, they report a better formed figure "eight" compared to the one in
Figure 5.4.12(c), after only 2000 cycles.)

(a) (b) (c)

Figure 5.4.11. Learning performance of time-dependent recurrent backpropagation: (a) desired trajectory
d1(t) versus d2(t), (b) generated state space trajectory, y1(t) versus y2(t) after 1,500 cycles, and (c) y1(t)
versus y2(t) after 12,000 cycles. (From B. A. Pearlmutter, 1989a, with permission of the MIT Press.)

Finally, an important property of the continuous-time recurrent net described by Equation (5.4.20) should
be noted. It has been shown (Funahashi and Nakamura, 1993) that the output of a sufficiently large
continuous-time recurrent net with hidden units can approximate any continuous state space trajectory to
any desired degree of accuracy. This means that recurrent neural nets are universal approximators of
dynamical systems. Note, however, that this says nothing about the existence of a learning procedure which
will guarantee that any continuous trajectory is learned successfully. What it implies, though, is that the
failure of learning a given continuous trajectory by a sufficiently large recurrent net would be attributed to
the learning algorithm used.

(a) (b) (c)


Figure 5.4.12. Learning the figure "eight" by a time-dependent recurrent backpropagation net. (a) Desired
state space trajectory, (b) generated trajectory after 3,182 cycles, (c) generated trajectory after 20,000
cycles. (From B. A. Pearlmutter, 1989a, with permission of the MIT Press.)

5.4.5 Real-Time Recurrent Learning

Another method that allows sequences to be associated is the real-time recurrent learning (RTRL) method
proposed by Williams and Zipser (1989a, b). This method allows recurrent networks to learn tasks that
require retention of information over time periods having either fixed or indefinite length. RTRL assumes
recurrent nets with discrete-time states that evolve according to

(5.4.25)

A desired target trajectory d(t) is associated with each input trajectory x(t). As before, the quadratic error
measure is used

(5.4.26)

where the error El(t) equals dl(t) − yl(t) for units l with specified targets at time t, and zero otherwise.

Thus, gradient descent on Etotal gives

(5.4.27)

with

(5.4.28)

The partial derivative in Equation (5.4.28) can now be computed from Equation (5.4.25) as

(5.4.29)
Since Equation (5.4.29) relates the derivatives at time t to those at time t−1, we can iterate it forward (starting from some initial value for the derivatives, e.g., zero) and compute them at any desired time, while using Equation (5.4.25) to iteratively update the states at each iteration. Each cycle of this algorithm requires
time proportional to N4, where N is the number of units in a fully interconnected net. Instead of using
Equation (5.4.27) to update the weights, it was found (Williams and Zipser, 1989a) that updating the
weights after each time step according to Equation (5.4.28) works well as long as the learning rate ρ is
kept sufficiently small; thus the name real-time recurrent learning. This avoids the need for allocating
memory proportional to the maximum sequence length and leads to simple on-line implementations. The
power of this method was demonstrated through a series of simulations (Williams and Zipser, 1989b). In
one particular simulation, a 12 unit recurrent net learned to detect whether a string of arbitrary length
comprised of left and right parentheses consists entirely of sets of balanced parentheses, by observing only
the action of a Turing machine performing the same task. In some of the simulations, it was found that
learning speed (and sometimes convergence) improved by setting the states of units yl(t) with known targets
to their target values, but only after computing El(t) and the derivatives in Equation (5.4.29). This heuristic
is known as teacher forcing; it helps keep the network closer to the desired trajectory. The reader may refer
to Robinson and Fallside (1988), Rohwer (1990), and Sun et al. (1992) for other methods for learning
sequences in recurrent networks. (The reader is also referred to the Special Issue of the IEEE Transactions
on Neural Networks, volume 5(2), 1994, for further exploration into recurrent neural networks and their
applications.)

5.5 Summary

We started this chapter by deriving backprop, a gradient descent-based learning procedure for minimizing
the sum of squared error criterion function in a feedforward layered network of sigmoidal units. This result
is a natural generalization of the delta learning rule given in Chapter 3 for single layer networks. We also
presented a global descent-based error backpropagation procedure which employs automatic tunneling
through the error function for escaping local minima and converging towards a global minimum.

Various variations of backprop were then introduced in order to improve convergence speed, avoid "poor" local
minima, and enhance generalization. These variations include weight initialization methods, autonomous
adjustment of learning parameters, and the addition of regularization terms to the error function being
minimized. The theoretical basis for several of these variations was presented.

A number of significant real-world applications were presented in which backprop is used to train feedforward
networks to realize complex mappings between noisy sensory data and the corresponding desired
classifications or actions. These applications include converting a human's hand movements to speech,
handwritten digit recognition, autonomous vehicle control, medical diagnosis, and data compression.

Finally, extensions of the idea of backward error propagation learning to recurrent neural networks were
given which allow for temporal association of time sequences. Time-delay neural networks, which may be
viewed as nonlinear FIR or IIR filters, were shown to be capable of sequence recognition and association
using standard backprop training. Backpropagation through time was introduced as a training method for
fully recurrent networks. It employs a trick that allows backprop with weight sharing to be used to train an
unfolded feedforward nonrecurrent version of the original network. Direct training of fully recurrent
networks is also possible. A recurrent backpropagation method for training fully recurrent nets on static
(spatial) associations was presented. This method was also extended to temporal association of continuous-time
sequences (time-dependent recurrent backpropagation). Finally, a method for on-line temporal association of
discrete-time sequences (real-time recurrent learning) was discussed.

Problems

5.1.1 Derive Equations (5.1.11) and (5.1.12).


5.1.2 Derive the backprop learning rule for the first hidden layer (layer directly connected to the input
signal x) in a three layer (two hidden layer) feedforward network. Assume that the first hidden layer has K
units with weights wki and differentiable activations fh1(netk), the second hidden layer has J units with
weights wjk and differentiable activations fh2(netj), and the output layer has L units with weights wlj and
differentiable activations fo(netl).

5.1.3 Consider the neural network in Figure 5.1.1 with full additional connections between the input vector
x and the output layer units. Let the weights of these additional connections be designated as wli (connection
weights between the lth output unit and the ith input signal.) Derive a learning rule for these additional
weights based on gradient descent minimization of the instantaneous SSE criterion function.

5.1.4 Derive the batch backprop rule for the network in Figure 5.1.1.

†5.1.5 Use the incremental backprop procedure described in Section 5.1.1 to train a two-layer network with
12 hidden units and a single output unit to learn to distinguish between the class regions in Figure P5.1.5.
Follow a similar training strategy to the one employed in Example 5.1.1. Generate and plot the separating
surfaces learned by the various units in the network. Can you identify the function realized by the output
unit?

Figure P5.1.5. Two-class classification problem.

*† 5.1.6 Derive and implement numerically the global descent backprop learning algorithm for a single
hidden layer feedforward network starting from Equation (5.1.15). Generate a learning curve (as in Figure
5.1.5) for the 4-bit parity problem using incremental backprop, batch backprop, and global descent
backprop. Assume a fully interconnected feedforward net with four hidden units having unipolar sigmoid
activations, and use the same initial weights and learning rates for all learning algorithms (use = 2 and
k = 0.001 for global descent, and experiment with different directions of the perturbation vector w).

5.1.7 Consider the two layer feedforward net in Figure 5.1.1. Assume that we replace the hidden layer

weights wji by nonlinear weights of the form where rji ∈ R is a parameter associated with hidden unit
j, and xi is the ith component of the input vector x. It has been shown empirically (Narayan, 1993) that this
network is capable of faster and more accurate training when the weights and the rji exponents are adapted
as compared to the same network with fixed rji = 0. Derive a learning rule for rji based on incremental
gradient descent minimization of the instantaneous SSE criterion of Equation (5.1.1). Are there any
restrictions on the values of the inputs xi? What would be a reasonable initial value for the rji exponents?
† 5.1.8 Consider the Feigenbaum (1978) chaotic time series generated by the nonlinear iterated (discrete-
time) map

Plot the time series x(t) for t ∈ [0, 20], starting from x(0) = 0.2. Construct (by inspection) an optimal net of
the type considered in Problem 5.1.7 which will perfectly model this iterated map (assume zero biases and
linear activation functions with unity slope for all units in the network). Now, vary all exponents and
weights by +1 percent and −2 percent, respectively. Compare the time series predicted by this varied
network to x(t) over the range t ∈ [0, 20]. Assume x(0) = 0.2. Note that the output of the net at time t + 1 must
serve as the new input to the net for predicting the time series at t + 2, and so on.

5.2.1 Consider a unit with n weights wi uniformly and randomly distributed in the range .
Assume that the components xi of the input vector x are randomly and uniformly distributed in the interval

[0, 1]. Show that the random variable has a zero mean and unity standard deviation.

5.2.2 Explain qualitatively the characteristics of the approximate Newton's rule of Equation (5.2.6).

5.2.3 Complete the missing steps in the derivation of Equation (5.2.9).

5.2.4 Derive the activation function slope update rules of Equations (5.2.10) and (5.2.11).

5.2.5 Derive the incremental backprop learning rule starting from the entropy criterion function in Equation
(5.2.16).

5.2.6 Derive the incremental backprop learning rule starting from the Minkowski-r criterion function in
Equation (5.2.19).

5.2.7 Comment on the qualitative characteristics of the Minkowski-r criterion function for negative r.

5.2.8 Derive Equation (5.2.22).

5.2.9 Derive the partial derivatives of R in Equations (5.2.25) through (5.2.28) for the soft weight-sharing
regularization term in Equation (5.2.24). Use the appropriate partial derivatives to solve analytically for the

optimal mixture parameters and , assuming fixed values for the "responsibilities" rj(wi).

5.2.10 Give a qualitative explanation for the effect of adapting the Gaussian mixture parameters (the mixing proportions, means, and variances) on
learning in a feedforward neural net.

5.2.11 Consider the criterion function with entropy regularization (Kamimura, 1993):
where is a normalized output of hidden unit j, and > 0. Assume the same network
architecture as in Figure 5.1.1 with logistic sigmoid activations for all units and derive backprop based on
this criterion/error function. What are the effects of the entropy regularization term on the hidden layer
activity pattern of the trained net?

*5.2.12 The optimum steepest descent method employs a learning step defined as the smallest
positive root of the equation

Show that the optimal learning step is approximately given by (Tsypkin, 1971)

† 5.2.13 Repeat the exact same simulation in Figure 5.2.3 but with a 40-hidden-unit feedforward net. During
training, use the noise-free training samples as indicated by the small circles in Figure 5.2.3; these samples
have the following x values: {−5, −4, −3, −2, −1, −1/2, 0, 1/3, 1, 2, 3, 4, 6, 8, 10}. By comparing the number of
degrees of freedom of this net to the size of the training set, what would your intuitive conclusions be about
the net's approximation behavior? Does the result of your simulation agree with your intuitive conclusions?
Explain. How would these results be impacted if a noisy data set is used?

† 5.2.14 Repeat the simulations in Figure 5.2.5 using incremental backprop with cross-validation-based
stopping of training. Assume the net to be identical to the one discussed in Section 5.2.6 in conjunction with
Figure 5.2.5. Also, use the same weight initialization and learning parameters. Plot the validation and
training RMS errors on a log-log scale for the first 10,000 cycles, and compare them to Figure 5.2.6. Discuss
the differences. Test the resulting "optimally trained" net on 200 points x, generated uniformly in [−8, 12].

Plot the output of this net versus x and compare it to the actual function being
approximated. Also, compare the output of this net to the one in Figure 5.2.5 (dashed line), and give the
reason(s) for the difference (if any) in performance of the two nets. The following training and validation
sets are to be used in this problem. The training set is the one plotted in Figure 5.2.5. The validation set has
the same noise statistics as for the training set.

Training Set Validation Set


Input Output Input Output
−5.0000 2.6017 −6.0000 2.1932
−4.0000 3.2434 −5.5000 2.5411
−3.0000 2.1778 −4.5000 1.4374
−2.0000 2.1290 −2.5000 2.8382
−1.0000 1.5725 −1.5000 1.7027
−0.5000 −0.4124 −0.7500 0.3688
0.0000 −2.2652 −0.2500 −1.1351
0.3333 −2.6880 0.4000 −2.3758
1.0000 −0.3856 0.8000 −2.5782
2.0000 −0.6755 1.5000 0.2102
3.0000 1.1409 2.5000 −0.3497
4.0000 0.8026 3.5000 1.5792
6.0000 0.9805 5.0000 1.1380
8.0000 1.4563 7.0000 1.9612
10.0000 1.2267 9.0000 0.9381

† 5.2.15 Repeat Problem 5.2.14 using, as your training set, all the available data (i.e., both training and
validation data). Here, cross-validation cannot be used to stop training, since we have no independent (non-
training) data to validate with. One way to help avoid overtraining in this case would be to stop at the
training cycle that led to the optimal net in Problem 5.2.14. Does the resulting net generalize better than the
one in Problem 5.2.14? Explain.

5.2.16 Consider the simple neural net in Figure P5.2.16. Assume the hidden unit has an activation function

and that the output unit has a linear activation with unit slope. Show that there exists a
set of real-valued weights {w1, w2, v1, v2} which approximates the discontinuous function,
for all x, a, b, and c ∈ R, to any degree of accuracy.

Figure P5.2.16. A neural network for approximating the function .

†5.4.1 Consider the time series generated by the Glass-Mackey (Mackey and Glass, 1977) discrete-time
equation

Plot the time series x(t) for t ∈ [0, 1000] and τ = 17. When solving the above nonlinear difference-delay
equation, an initial condition specified by an initial function defined over a strip of width τ is required.
Experiment with several different initial functions.

† 5.4.2 Use incremental backprop with sufficiently small learning rates to train the network in Figure 5.4.1

to predict in the Glass-Mackey time series of Problem 5.4.1 (assume τ = 17). Use a collection of
500 training pairs corresponding to different values of t generated randomly from the time series for

. Assume training pairs of the form


Also, assume 50 hidden units with hyperbolic tangent activation functions (with the activation slope set to 1) and use a linear
activation function for the output unit. Plot the training RMS error versus the number of training cycles.

Plot the signal predicted (recursively) by the trained network and compare it to for
t = 0, 6, 12, 18, ..., 1200. Repeat with a two hidden layer net having 30 units in its first hidden layer and 15
units in its second hidden layer (use the learning equation derived in Problem 5.1.2 to train the weights of
the first hidden layer). [For an interesting collection of time series and their prediction, the reader is referred
to the edited volume by Weigend and Gershenfeld (1994)].

† 5.4.3 Employ the series-parallel identification scheme of Section 5.4.1 (refer to Figure 5.4.3) to identify
the nonlinear discrete-time plant (Narendra and Parthasarathy, 1990)

Use a feedforward neural network having 20 hyperbolic tangent activation units (with the activation slope set to 1) in its hidden
layer, feeding into a linear output unit. Use incremental backprop, with sufficiently small learning rates, to
train the network. Assume the outputs of the delay lines (inputs to neural network in Figure 5.4.3) to be x(t)
and u(t). Also, assume uniform random inputs in the interval [−2, +2] during training. Plot the output of the
plant as well as the recursively generated output of the identification model for the input

5.4.4 Derive Equations (5.4.10) and (5.4.13).

5.4.5 Show that if the state y* is a locally asymptotically stable equilibrium of the dynamics in Equation
(5.4.6), then the state z* satisfying Equation (5.4.17) is a locally asymptotically stable equilibrium of the
dynamics in Equation (5.4.18). (Hint: Start by showing that linearizing the dynamical equations about their
respective equilibria gives

and

where and are small perturbations added to and , respectively.)

* 5.4.6 Derive Equations (5.4.22) and (5.4.24). (See Pearlmutter (1988) for help).

†5.4.7 Employ time-dependent recurrent backpropagation learning to generate the trajectories shown in
Figures 5.4.11 (a) and 5.4.12 (a).
5.4.8 Show that the RTRL method applied to a fully recurrent network of N units has
O(N⁴) computational complexity for each learning iteration.

6. Adaptive Multilayer Neural Networks II
6.0 Introduction

The previous chapter concentrated on multilayer architectures with sigmoidal type units, both static
(feedforward) and dynamic. The present chapter introduces several additional adaptive multilayer networks
and their associated training procedures, as well as some variations. The majority of the networks
considered here employ processing units which are not necessarily sigmoidal. A common feature in these
networks is their fast training as compared to the backprop networks of the previous chapter. The
mechanisms leading to such increased training speed are emphasized.

All networks discussed in this chapter differ in one or more significant ways from those in the previous
chapter. One group of networks employs units with localized receptive fields, where units receiving direct
input from input signals (patterns) can only "see" a part of the input pattern. Examples of such networks are
the radial basis function network and the cerebellar model articulation controller.

A second group of networks employs resource allocation. These networks are capable of allocating units as
needed during training. This feature enables the network size to be determined dynamically and eliminates
the need for guessing the proper network size. This resource allocating scheme is also shown to be the
primary reason for efficient training. Examples of networks in this group are hyperspherical classifiers and
the cascade-correlation network.

The above two groups of networks mainly employ supervised learning. Some of these networks may be
used as function interpolators/approximators, while others are best suited for classification tasks. The third
and last group of adaptive multilayer networks treated in this chapter has the capability of unsupervised
learning or clustering. Here, two specific clustering nets are discussed: The ART1 network and the
autoassociative clustering network.

Throughout this chapter, fundamental similarities and differences among the various networks are stressed.
In addition, significant extensions of the above networks are pointed out and the effects of these extensions
on performance are discussed.

6.1 Radial Basis Function (RBF) Networks

In this section, an artificial neural network model motivated by biological neurons with "locally-tuned" responses
is described. Neurons with locally-tuned response characteristics can be found in many parts of biological
nervous systems. These nerve cells have response characteristics which are "selective" for some finite range
of the input signal space. The cochlear stereocilia cells, for example, have a locally-tuned response to
frequency which is a consequence of their biophysical properties. The present model is also motivated by
earlier work on radial basis functions (Medgassy, 1961) which are utilized for interpolation (Micchelli,
1986 ; Powell, 1987), probability density estimation (Parzen, 1962 ; Duda and Hart, 1973 ; Specht, 1990),
and approximations of smooth multivariate functions (Poggio and Girosi, 1989). The model is commonly
referred to as the radial basis function (RBF) network.

The most important feature that distinguishes the RBF network from earlier radial basis function-based
models is its adaptive nature which generally allows it to utilize a relatively smaller number of locally-tuned
units (RBF's). RBF networks were independently proposed by Broomhead and Lowe (1988) , Lee and Kil
(1988), Niranjan and Fallside (1988), and Moody and Darken (1989a, 1989b). Similar schemes were also
suggested by Hanson and Burr (1987), Lapedes and Farber (1987), Casdagli (1989), Poggio and Girosi
(1990b), and others. The following is a description of the basic RBF network architecture and its associated
training algorithm.

The RBF network has a feedforward structure consisting of a single hidden layer of J locally-tuned units
which are fully interconnected to an output layer of L linear units as shown in Figure 6.1.1.

Figure 6.1.1. A radial basis function neural network consisting of a single hidden layer of locally-tuned
units which is fully interconnected to an output layer of linear units. For clarity, only hidden to output layer
connections for the lth output unit are shown.

All hidden units simultaneously receive the n-dimensional real-valued input vector x. Notice the absence of
hidden layer weights in Figure 6.1.1. This is because the hidden unit outputs are not calculated using the
weighted-sum/sigmoidal activation mechanism as in the previous chapter. Rather, here, each hidden unit
output zj is obtained by calculating the "closeness" of the input x to an n-dimensional parameter vector μj
associated with the jth hidden unit. Here, the response characteristics of the jth hidden unit are given by:

zj = K( ||x − μj|| / σj )    (6.1.1)

where K is a strictly positive radially-symmetric function (kernel) with a unique maximum at its "center" μj
and which drops off rapidly to zero away from the center. The parameter σj is the "width" of the receptive
field in the input space for unit j. This implies that zj has an appreciable value only when the "distance"
||x − μj|| is smaller than the width σj. Given an input vector x, the output of the RBF network is the L-
dimensional activity vector y whose lth component is given by:

yl(x) = Σ (j = 1 to J) wlj zj(x)    (6.1.2)

It is interesting to note here that for L = 1 the mapping in Equation (6.1.2) is similar in form to that
employed by a PTG, as in Equation (1.4.1). However, in the RBF net, a choice is made to use radially
symmetric kernels as "hidden units" as opposed to monomials.
RBF networks are best suited for approximating continuous or piecewise continuous real-valued mappings
f: Rn → RL, where n is sufficiently small; these approximation problems include classification problems
as a special case. According to Equations (6.1.1) and (6.1.2), the RBF network may be viewed as
approximating a desired function f(x) by superposition of non-orthogonal bell-shaped basis functions. The
degree of accuracy can be controlled by three parameters: the number of basis functions used, their
locations, and their widths. In fact, like feedforward neural networks with a single hidden layer of sigmoidal
units, it can be shown that RBF networks are universal approximators (Poggio and Girosi, 1989; Hartman et
al., 1990; Baldi, 1991; Park and Sandberg, 1991, 1993).

A special but commonly used RBF network assumes a Gaussian basis function for the hidden units:

zj = exp( −||x − μj||² / (2σj²) )    (6.1.3)

where σj and μj are the standard deviation and mean of the jth unit's receptive field, respectively, and the norm
is the Euclidean norm. Another possible choice for the basis function is the logistic function of the form:

(6.1.4)

where the bias of unit j is adjustable. In fact, with the basis function in Equation (6.1.4), the only difference between
an RBF network and a feedforward neural network with a single hidden layer of sigmoidal units is the
similarity computation performed by the hidden units. If we think of μj as the parameter (weight) vector
associated with the jth hidden unit, then it is easy to see that an RBF network can be obtained from a single
hidden layer neural network with unipolar sigmoid-type units and linear output units (like the one in Figure
5.1.1) by simply replacing the jth hidden unit weighted-sum netj = xTμj by the negative of the normalized
Euclidean distance between x and μj. On the other hand, the use of the Gaussian basis function in Equation (6.1.3)
leads to hidden units with Gaussian-type activation functions and with a Euclidean distance similarity
computation. In this case, no bias is needed.
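As an illustration of Equations (6.1.1) through (6.1.3), the following sketch (Python/NumPy, with illustrative names; the 2σj² normalization of the Gaussian exponent is the common convention and is assumed here) computes the hidden activities and linear outputs of a Gaussian RBF net.

import numpy as np

def rbf_forward(X, centers, widths, W):
    # X: (m, n) inputs; centers: (J, n) the mu_j; widths: (J,) the sigma_j; W: (L, J) output weights
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # ||x - mu_j||^2 for every pair
    Z = np.exp(-d2 / (2.0 * widths ** 2))                           # Gaussian hidden activities, cf. Eq. (6.1.3)
    Y = Z @ W.T                                                     # linear output units, cf. Eq. (6.1.2)
    return Z, Y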

Next, we turn our attention to the training of RBF networks. Consider a training set of m labeled pairs {xi,
di} which represent associations of a given mapping or samples of a continuous multivariate function. Also,
consider the SSE criterion function as an error function E that we desire to minimize over the given training
set. In other words, we would like to develop a training method that minimizes E by adaptively updating the
free parameters of the RBF network. These parameters are the receptive field centers (the means μj of the hidden
layer Gaussian units), the receptive field widths (the standard deviations σj), and the output layer weights (wlj).

Because of the differentiable nature of the RBF network's transfer characteristics, one of the first training
methods that comes to mind is a fully supervised gradient descent method over E (Moody and Darken,

1989a; Poggio and Girosi, 1989). In particular, μj, σj, and wlj are updated by gradient descent as follows:
Δμj = −ρμ (∂E/∂μj), Δσj = −ρσ (∂E/∂σj), and Δwlj = −ρw (∂E/∂wlj), where ρμ, ρσ, and ρw are small positive
learning constants. This method, although capable of matching or exceeding the performance of backprop-trained
networks, still gives training times comparable to those of sigmoidal-type networks (Wettschereck and Dietterich, 1992).
One reason for the slow convergence of the above supervised gradient descent trained RBF network is its
inefficient use of the locally-tuned representation of the hidden layer units. When the hidden unit receptive
fields are narrow, only a small fraction of the total number of units in the network will be activated for a
given input x; the activated units are the ones with centers very close to the input vector in the input space.
Thus, only those units which were activated need be updated for each input presentation. The above
supervised learning method, though, places no restrictions on maintaining small values for σj. Thus, the
supervised learning method is not guaranteed to utilize the computational advantages of locality. One way
to rectify this problem is to use gradient descent-based learning only for the basis function centers and to use a
method that maintains small values for the σj's. Examples of learning methods which take advantage of the
locality property of the hidden units are presented below.

A training strategy that decouples learning at the hidden layer from that at the output layer is possible for
RBF networks due to the local receptive field nature of the hidden units. This strategy has been shown to be
very effective in terms of training speed, though this advantage is generally offset by reduced
generalization ability, unless a large number of basis functions is used. In the following, we describe
efficient methods for locating the receptive field centers and computing receptive field widths. As for the
output layer weights, once the hidden units are synthesized, these weights can be easily computed using the
delta rule [Equation (5.1.2)]. One may view this computation as finding the proper normalization
coefficients of the basis functions. That is, the weight wlj determines the amount of contribution of the jth
basis function to the lth output of the RBF net.

Several schemes have been suggested to find proper receptive field centers and widths without propagating
the output error back through the network. The idea here is to populate dense regions of the input space
with receptive fields. One method places the centers of the receptive fields according to some coarse lattice
defined over the input space (Broomhead and Lowe, 1988). Assuming a uniform lattice with k divisions
along each dimension of an n-dimensional input space, this lattice would require kⁿ basis functions to cover
the input space. This exponential growth renders the approach impractical for high-dimensional spaces. An
alternative approach is to center k receptive fields on a set of k randomly chosen training samples. Here,
unless we have prior knowledge about the location of prototype input vectors and/or the regions of the input
space containing meaningful data, a large number of receptive fields would be required to adequately
represent the distribution of the input vectors in a high dimensional space.

Moody and Darken (1989a) employed unsupervised learning of the receptive field centers μj in which a
relatively small number of RBF's are used; the adaptive centers learn to represent only the parts of the input
space which are richly represented by clusters of data. The adaptive strategy also helps reduce sampling
error since it allows the μj's to be determined by a large number of training samples. Here, the k-means
clustering algorithm (MacQueen, 1967; Anderberg, 1973) is used to locate a set of k RBF centers which
represents a local minimum of the SSE between the training set vectors x and the nearest of the k receptive
field centers μj (this SSE criterion function is given by Equation (4.6.4) with w replaced by μ). In the basic k-
means algorithm, the k RBFs are initially assigned centers μj, j = 1, 2, ..., k, which are set equal to k randomly
selected training vectors. The remaining training vectors are each assigned to the class j of the closest center μj. Next,
the centers are recomputed as the average of the training vectors in their class. This two-step process is
repeated until all centers stop changing. An incremental version of this batch mode process may also be
used which requires no storage of past training vectors or cluster membership information. Here, at each
time step, a random training vector x is selected and the center μj of the nearest (in a Euclidean distance
sense) receptive field is updated according to:

Δμj = ρ (x − μj)    (6.1.5)

where ρ is a small positive constant. Equation (6.1.5) is the simple competitive rule which we have analyzed
in Section 4.6.1. Similarly, we may use learning vector quantization (LVQ) or one of its variants (see
Section 3.4.2) to effectively locate the k RBF centers (Vogt, 1993). Generally speaking, there is no formal
method for specifying the required number k of hidden units in an RBF network. Cross-validation is
normally used to decide on k.
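A minimal sketch of the incremental center-placement rule of Equation (6.1.5) is given below (Python/NumPy). The initialization on randomly chosen training vectors, the fixed learning constant, and the fixed number of passes are assumptions of this sketch, not prescriptions of the text.

import numpy as np

def place_centers_incremental(X, k, rho=0.05, epochs=20, seed=0):
    # X: (m, n) training vectors; returns (k, n) receptive field centers
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()   # initialize on random samples
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(((centers - x) ** 2).sum(axis=1))         # nearest center (the winner)
            centers[j] += rho * (x - centers[j])                    # competitive update, cf. Eq. (6.1.5)
    return centers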
Once the receptive field centers are found using one of the above methods, their widths can be determined
by one of several heuristics in order to get smooth interpolation. Theoretically speaking, RBF networks
with the same σ in each hidden kernel unit have the capability of universal approximation (Park and
Sandberg, 1991). This suggests that we may simply use a single global fixed value for all σj's in the network.
In order to preserve the local response characteristics of the hidden units, one should choose a relatively
small (positive) value for this global width parameter. The actual value of σ for a particular training set may
be found by cross-validation. Empirical results (Moody and Darken, 1989a) suggest that a "good" estimate
for the global width parameter is the average width, i.e., the global average
over all Euclidean distances between the center of each unit i and that of its nearest neighbor j. Other
heuristics based on local computations may be used which yield individually-tuned widths σj. For example,
the width for unit j may be set proportional to the distance ||μj − μi||, where μi is the center of the nearest neighbor to
unit j (the proportionality factor is usually taken between 1.0 and 1.5). For classification tasks, one may make use of the category
label of the nearest training vector. If that category label is different from that represented by the current
RBF unit, it would be advisable to use a smaller width which narrows the bell-shaped receptive field of the
current unit. This leads to a sharpening of the class domains and allows for better approximation.
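The nearest-neighbor width heuristics just described can be sketched as follows (Python/NumPy; the scale factor of 1.2 is an arbitrary choice within the 1.0 to 1.5 range mentioned above).

import numpy as np

def rbf_widths(centers, scale=1.2, use_global=False):
    # pairwise distances between centers; the diagonal is excluded
    D = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(D, np.inf)
    nn = D.min(axis=1)                            # distance from each center to its nearest neighbor
    if use_global:
        return np.full(len(centers), nn.mean())   # one global width: average nearest-neighbor distance
    return scale * nn                             # individually tuned widths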

We have already noted that the output layer weights, wlj, can be adaptively computed using the delta rule

Δwlj = ρ (dl − yl) f '(netl) zj    (6.1.6)

once the hidden layer parameters are obtained. Here, the term f '(netl) can be dropped for the case of linear
units. Equation (6.1.6) drives the output layer weights to minimize the SSE criterion function [recall
Equation (5.1.13)] for a sufficiently small learning rate ρ. Alternatively, for the case of linear output units, one may
formulate the problem of computing the weights as a set of simultaneous linear equations and employ the
generalized-inverse method [recall Equation (3.1.42)] to obtain the minimum SSE solution. Without loss of
generality, consider a single output RBF net, and denote by w = [w1 w2 ... wJ]T the weight vector of the
output unit. Now, recalling Equations (3.1.39) through (3.1.42), the minimum SSE solution for the system
of equations ZTw = d is given by (assuming an overdetermined system; i.e., m > J)

w* = Z†d = (ZZT)−1Zd (6.1.7)

where Z = [z1 z2 ... zm] is a J × m matrix, and d = [d1 d2 ... dm]T. Here, zi is the output of the hidden layer
for input xi. Therefore, the jith element of matrix Z may be expressed explicitly as

Zji = zj(xi) = exp( −||xi − μj||² / (2σj²) )    (6.1.8)

with the parameters μj and σj assumed to have been computed using the methods described earlier.
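In practice, the minimum-SSE output weights of Equation (6.1.7) are usually obtained with a numerical least-squares solver rather than by forming (ZZT)−1 explicitly. A sketch (Python/NumPy, illustrative names):

import numpy as np

def output_weights_least_squares(Z, D):
    # Z: (J, m) hidden outputs for the m training inputs; D: (m, L) desired outputs
    # Solves Z^T W^T = D in the least-squares sense, equivalent to Eq. (6.1.7)
    # but numerically safer than computing (Z Z^T)^(-1) directly.
    Wt, *_ = np.linalg.lstsq(Z.T, D, rcond=None)
    return Wt.T                                   # W is (L, J)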

For "strict" interpolation problems, it is desired that an interpolation function be found which is constrained
to "exactly" map the sample points xi into their associated targets di, for i = 1, 2, ..., m. It is well known that
a polynomial with finite order r = m − 1 is capable of performing strict interpolation on m samples {xi, di},
with distinct xi's in Rn (see Problem 1.3.4). A similar result is available for RBF nets. This result states that
there is a class of radial-basis functions which guarantee that an RBF net with m such functions is capable
of strict interpolation of m sample points in Rn (Micchelli, 1986; Light, 1992b); the Gaussian function in
Equation (6.1.3) is one example. Furthermore, there is no need to search for the centers μj; one can just set
μj = xj for j = 1, 2, ..., m. Thus, for strict interpolation, the Z matrix in Equation (6.1.8) becomes the m × m
matrix
Zji = exp( −||xi − xj||² / (2σj²) ),    i, j = 1, 2, ..., m    (6.1.9)

which we refer to as the interpolation matrix. Note that the appropriate width parameters σj still need to be
found; the choice of these parameters affects the interpolation quality of the RBF net.

According to the above discussion, an exact solution w* is assured. This requires Z to be nonsingular.
Hence, w* can be computed as

w* = (ZT)−1 d    (6.1.10)

Although in theory Equation (6.1.10) always assures a solution to the strict interpolation problem, in
practice the direct computation of (ZT)−1 can become ill-conditioned due to the possibility of ZT being
nearly singular. Alternatively, one may resort to Equation (6.1.6) for an adaptive computation of w*.
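A sketch of the strict-interpolation design (Gaussian kernels centered on the samples, weights obtained by solving ZTw = d) is given below (Python/NumPy, illustrative names). When the chosen width makes Z nearly singular, a least-squares solve or the adaptive rule of Equation (6.1.6) should be used instead.

import numpy as np

def strict_interpolation_weights(X, d, sigma):
    # X: (m, n) sample points (also used as the centers); d: (m,) targets; sigma: common width
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    Z = np.exp(-d2 / (2.0 * sigma ** 2))          # m x m interpolation matrix, cf. Eq. (6.1.9)
    return np.linalg.solve(Z.T, d)                # exact fit at the m samples (Z assumed nonsingular)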

Receptive field properties play an important role in the quality of an RBF network's approximation
capability. To see this, consider a single input/single output RBF network for approximating a continuous
function f: R → R. Approximation error, due to error in the "fit" of the RBF network to that of the target
function f, occurs when the receptive fields (e.g., Gaussians) are either too broad and/or too widely spaced
relative to the fine spatial structure of f. In other words, these factors act to locally limit the high frequency
content of the approximating network. According to Nyquist's sampling criterion, the highest frequency
which may be recovered from a sampled signal is one half the sampling frequency. Therefore, when the
receptive field density is not high enough, the high frequency fine structure in the function being
approximated is lost. The high frequency fine structure of f can also be "blurred" when the receptive fields
are excessively wide. By employing the Taylor series expansion, it can be shown (Hoskins et al., 1993) that
when the width parameter σ is large, the RBF net exhibits polynomial behavior, with an order that successively
decreases as the RBF widths increase. In other words, the net's output approaches that of a polynomial
function whose order decreases with increasing σ. Therefore, it is important that receptive field densities and widths be
chosen to match the frequency transfer characteristics imposed by the function f (Mel and Omohundro,
1991). These results also suggest that even for moderately high dimensional input spaces, a relatively large
number of RBF's must be used if the training data represent high frequency content mappings (functions)
and if low approximation error is desired. These observations can be generalized to the case of RBF
network approximation of multivariate functions.

Example 6.1.1: The following example illustrates the application of the RBF net for approximating the

function g(x) (refer to the solid line plot in Figure 6.1.2) from the fifteen noise-free
samples (xj, g(xj)), j = 1, 2, ..., 15, in Figure 6.1.2. We will employ the method of strict interpolation for
designing the RBF net. Hence, 15 Gaussian hidden units are used (all having the same width parameter σ),
with the jth Gaussian unit having its center μj equal to xj. The design is completed by computing the weight
vector w of the output linear unit using Equation (6.1.10). Three designs are generated, corresponding to
the values σ = 0.5, 1.0, and 1.5. We then tested these networks with two hundred inputs x, uniformly sampled
in the interval [−8, 12]. The output of the RBF net is shown in Figure 6.1.2 for σ = 0.5 (dotted line), σ = 1.0
(dashed line), and σ = 1.5 (dotted-dashed line). The value σ = 1.0 is close to the average distance among all
15 sample points, and it resulted in better interpolation of g(x) compared to σ = 0.5 and σ = 1.5. As expected,
these results show poor extrapolation capabilities by the RBF net, regardless of the value of σ (check the net
output in Figure 6.1.2 for x > 10 and x < −5). It is interesting to note the excessive overfit by the RBF net
for relatively high σ (compare to the polynomial-based strict interpolation of the same data shown in Figure
5.2.2). Finally, by comparing the above results to those in Figure 5.2.3 one can see that more accurate
interpolation is possible with sigmoidal hidden unit nets; this is mainly attributed to the ability of
feedforward multilayer sigmoidal unit nets to approximate the first derivative of g(x).
Figure 6.1.2. RBF net approximation of the function g(x) (shown as a solid line),
based on strict interpolation using the 15 samples shown (small circles). The RBF net employs 15 Gaussian
hidden units and its output is shown for three hidden unit widths: σ = 0.5 (dotted line), σ = 1.0 (dashed line),
and σ = 1.5 (dotted-dashed line). (Compare these results to those in Figures 5.2.2 and 5.2.3.)

6.1.1 RBF Networks Versus Backprop Networks

RBF networks have been applied with success to function approximation (Broomhead and Lowe, 1988 ;
Lee and Kil, 1988 ; Casdagli, 1989 ; Moody and Darken, 1989a, 1989b) and classification (Niranjan and
Fallside, 1988 ; Nowlan, 1990; Lee, 1991 ; Wettschereck and Dietterich, 1992 ; Vogt, 1993). On difficult
approximation/prediction tasks (e.g., predicting the Glass-Mackey chaotic series of Problem 5.4.1 T time
steps (T > 50) in the future), RBF networks which employ clustering for locating hidden unit receptive field
centers can achieve a performance comparable to backprop networks (backprop-trained feedforward
networks with sigmoidal hidden units), while requiring orders of magnitude less training time than
backprop. However, the RBF network typically requires ten times or more data to achieve the same
accuracy as a backprop network. The accuracy of RBF networks may be further improved if supervised
learning of receptive field centers is used (Wettschereck and Dietterich, 1992) but the speed advantage over
backprop networks is compromised. For difficult classification tasks, RBF networks or their modified
versions (see Section 6.1.2) employing sufficient training data and hidden units can lead to better
classification rates (Wettschereck and Dietterich, 1992) and smaller "false-positive" classification errors
(Lee, 1991) compared to backprop networks. In the following, qualitative arguments are given for the above
simulation-based observations on the performance of RBF and backprop networks.

Some of the reasons for the training speed advantage of RBF networks have been presented earlier in this
section. Basically, since the receptive field representation is well localized, only a small fraction of the
hidden units in an RBF network responds to any particular input vector. This allows the use of efficient
self-organization (clustering) algorithms for adapting such units in a training mode that does not involve the
network's output units. On the other hand, all units in a backprop network must be evaluated and their
weights updated for every input vector. Another important reason for the faster training speed of RBF
networks is the hybrid two-stage training scheme employed, which decouples the learning task for both
hidden and output layers thus eliminating the need for the slow back error propagation.

The RBF network with self-organized receptive fields needs more data and more hidden units to achieve
similar precision to that of the backprop network. When used for function approximation, the backprop
network performs global fit to the training data, whereas the RBF network performs local fit. This results in
greater generalization by the backprop network from each training example. It also utilizes the network's
free parameters more efficiently, which leads to a smaller number of hidden units. Furthermore, the
backprop network is a better candidate net when extrapolation is desired. This is primarily due to the ability
of feedforward nets, with sigmoidal hidden units, to approximate a function and its derivatives (see Section
5.2.5). On the other hand, the local nature of the hidden unit receptive fields in RBF nets prevents them
from being able to "see" beyond the training data. This makes the RBF net a poor extrapolator.

When used as a classifier, the RBF net can lead to low "false-positive" classification rates. This property is
due to the same reason that makes RBF nets poor extrapolators. Regions of the input space which are far
from training vectors are usually mapped to low values by the localized receptive fields of the hidden units.
By contrast, the sigmoidal hidden units in the backprop network can have high output even in regions far
away from those populated by training data. This causes the backprop network/classifier to indicate high
confidence classifications to meaningless inputs. False-positive classification may be reduced in backprop
networks by employing the "training with rubbish" strategy discussed at the end of Section 5.3.3. However,
and when dealing with high dimensional input spaces, this strategy generally requires an excessively large
training set due to the large number of possible "rubbish" pattern combinations.

Which network is better to use for which tasks? The backprop network is better to use when training data is
expensive (or hard to generate) and/or retrieval speed, assuming a serial machine implementation, is critical
(the smaller backprop network size requires less storage and leads to faster retrievals compared to RBF
networks). However, if the data is cheap and plentiful and if on-line training is required (e.g., the case of
adaptive signal processing or adaptive control where data is acquired at a high rate and cannot be saved),
then the RBF network is superior.

6.1.2 RBF Network Variations

In their work on RBF networks, Moody and Darken (1989a) suggested the use of normalized hidden unit
activities according to

z̄j(x) = zj(x) / Σk zk(x),    k = 1, 2, ..., J    (6.1.11)

based on empirical evidence of improved approximation properties. The use of Equation (6.1.11) implies
that Σj z̄j(x) = 1 for all inputs x; i.e., the unweighted sum of all hidden unit activities in an RBF network
results in the unity function. Here, the RBF network realizes a "partition of unity," which is a desired
mathematical property in function decomposition/approximation (Werntges, 1993); the motivation being
that a superposition of basis functions that can represent the unity function (f(x) = 1) "exactly" would also
suppress spurious structure when fitting a non-trivial function. In other words, the normalization in
Equation (6.1.11) leads to a form of "smoothness" regularization.

Another justification for the normalization of hidden unit outputs may be given based on statistical
arguments. If one interprets zj in Equation (6.1.1) as the probability Pj(xk) of observing xk under Gaussian
distribution j:

Pj(xk) = a exp( −||xk − μj||² / (2σ²) )    (6.1.12)

(where a is a normalization constant and σj = σ for all j) and also assumes that all Gaussians are selected with
equal probability, then the probability of Gaussian j having generated xk, given that we have observed xk, is:

P(j | xk) = Pj(xk) / Σl Pl(xk),    l = 1, 2, ..., J    (6.1.13)
Therefore, the normalization in Equation (6.1.11) now has a statistical significance: it represents the
conditional probability of unit j generating xk.
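A sketch of the normalized hidden activities follows (Python/NumPy, assuming a common width σ as in Equation (6.1.12)); the returned values sum to one across units for every input, forming a partition of unity, and can be read as the posteriors P(j | x) of Equation (6.1.13) under equal priors.

import numpy as np

def normalized_rbf_activities(X, centers, sigma):
    # Gaussian responses with a common width, normalized so they sum to one for each input
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Z = np.exp(-d2 / (2.0 * sigma ** 2))
    return Z / Z.sum(axis=1, keepdims=True)       # rows form a partition of unity; cf. P(j | x)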

Another variation of RBF networks involves the so-called "soft" competition among Gaussian units for
locating the centers μj (Nowlan, 1990). The clustering of the μj's according to the incremental k-means
algorithm is equivalent to a "hard" competition winner-take-all operation where, upon the presentation of
input xk, the RBF unit with the highest output zj updates its mean μj according to Equation (6.1.5). This in
effect realizes an iterative version of the "approximate" maximum likelihood estimate (Nowlan, 1990):

μj = (1/Nj) Σ xk,    xk ∈ Sj    (6.1.14)

where Sj is the set of exemplars closest to Gaussian j, and Nj is the number of vectors contained in this set.
Rather than using the approximation in Equation (6.1.14), the "exact" maximum likelihood estimate for μj is
given by (Nowlan, 1990):

μj = Σk P(j | xk) xk / Σk P(j | xk)    (6.1.15)

where P(j | xk) is given by Equation (6.1.13). In this "soft" competitive model, all hidden unit centers are
updated according to an iterative version of Equation (6.1.15). One drawback of this "soft" clustering
method is the computational requirements in that all μj's, rather than only the mean of the winner, are updated for
each input. However, the high performance of RBF networks employing "soft" competition may justify this
added training computational cost. For example, consider the classical vowel recognition task of Peterson
and Barney (1952). Here, the data is obtained by spectrographic analysis and consists of the first and second
formant frequencies of 10 vowels contained in words spoken by a total of 67 men, women and children.
The spoken words consisted of 10 monosyllabic words each beginning with the letter "h" and ending with
"d" and differing only in the vowel. The words used to obtain the data were heed, hid, head, had, hud, hod,
heard, hood, who'd, and hawed. This vowel data is randomly split into two sets, resulting in 338 training
examples and 333 test examples. A plot of the test examples is shown in Figure 6.1.3. An RBF network
employing 100 Gaussian hidden units and soft competition for locating the Gaussian means is capable of
87.1 percent correct classification on the 333 example test set of the vowel data after being trained with the
338 training examples (Nowlan, 1990). This performance exceeds the 82.0%, 82.0%, and 80.2%
recognition rates reported for a 100 unit k-means-trained RBF network (Moody and Darken, 1989b), k-
nearest neighbor network (Huang and Lippmann, 1988), and backprop network (Huang and Lippmann,
1988), respectively (the decision boundaries shown in Figure 6.1.3 are those generated by the backprop
network). A related general framework for designing optimal RBF classifiers can be found in Fakhr (1993).
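A batch sketch of the "soft" competitive update of the means is given below (Python/NumPy); a fixed common width and a fixed number of passes are assumptions of this sketch, and the incremental version used by Nowlan differs in detail. Every center is pulled toward every sample in proportion to its responsibility, instead of only the winning center moving as in hard competition.

import numpy as np

def soft_competition_centers(X, centers, sigma, n_passes=50):
    # X: (m, n) data; centers: (J, n) initial means; sigma: common width
    for _ in range(n_passes):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        R = np.exp(-d2 / (2.0 * sigma ** 2))
        R /= R.sum(axis=1, keepdims=True)             # responsibilities P(j | x_k)
        centers = (R.T @ X) / R.sum(axis=0)[:, None]  # responsibility-weighted means of the data
    return centers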

Figure 6.1.3. A plot of the test samples for the 10 vowel problem of Peterson and Barney (1952). The lines
are class boundaries generated by a two layer feedforward net trained with backprop on training samples.
(Adapted from W. Y. Huang and R. P. Lippmann, 1988, with permission of the American Institute of
Physics).

We conclude this section by considering a network of "semilocal activation" hidden units (Hartman and
Keeler, 1991a). This network has been found to retain comparable training speeds to RBF networks, with
the advantages of requiring a smaller number of units to cover high-dimensional input spaces and producing
high approximation accuracy. Semilocal activation networks are particularly advantageous when the
training set has irrelevant input exemplars.

An RBF unit responds to a localized region of the input space. Figure 6.1.4 (a) shows the response of a two
input Gaussian RBF. On the other hand, a sigmoid unit responds to a semi-infinite region by partitioning
the input space with a "sigmoidal" hypersurface, as shown in Figure 6.1.4 (b). RBF's have greater flexibility
in discriminating finite regions of the input space, but this comes at the expense of a great increase in the
number of required units. To overcome this tradeoff, "Gaussian-bar" units with the response depicted in
Figure 6.1.4 (c) may be used to replace the RBF's. Analytically, the output of the jth Gaussian-bar unit is
given by:

zj = Σi wji exp[ −(xi − μji)² / (2σji²) ],    i = 1, 2, ..., n    (6.1.16)

where i indexes the input dimension and wji is a positive parameter signifying the ith weight of the jth
hidden unit. For comparison purposes, we write the Gaussian RBF as a product:

zj = Πi exp[ −(xi − μji)² / (2σji²) ]    (6.1.17)

According to Equation (6.1.16), the Gaussian-bar unit responds if any of the n component Gaussians is activated
(assuming the scaling factors wji are non-zero) while a Gaussian RBF requires all component Gaussians to
be activated. Thus a Gaussian-bar unit is more like an "ORing" device and a pure Gaussian is more like an
"ANDing" device. Note that a Gaussian-bar network has significantly more free parameters to adjust
compared to a Gaussian RBF network of the same size (number of units). The output units in a Gaussian-
bar network can be linear or Gaussian-bar.
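The ORing versus ANDing behavior of Equations (6.1.16) and (6.1.17) can be seen directly from the following sketch of the two unit types (Python/NumPy, illustrative names).

import numpy as np

def gaussian_rbf_unit(x, mu, sigma):
    # product over input dimensions: every component Gaussian must be active ("ANDing")
    return np.prod(np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)))

def gaussian_bar_unit(x, mu, sigma, w):
    # weighted sum over input dimensions: any single active component suffices ("ORing")
    return np.sum(w * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)))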

Because of their semilocal receptive fields, the centers μji of the hidden units cannot be determined effectively
using competitive learning as in RBF networks. Therefore, supervised gradient descent-based learning is
normally used to update all network parameters.


Figure 6.1.4. Response characteristics for two-input (a) Gaussian, (b) sigmoid, and (c) Gaussian-bar units.

Since the above Gaussian-bar network employs parameter update equations which are non-linear in their
parameters, one might suspect that the training speed of such a network is compromised. However, on the
contrary, simulations involving difficult function prediction tasks have shown that training Gaussian-bar
networks is significantly faster than training sigmoid networks, and slower than, but of the same order as, training RBF networks.
One possible explanation for the training speed of Gaussian-bar networks could be their built-in automatic
dynamic reduction of the network architecture (Hartman and Keeler, 1991b), as explained next.
A Gaussian-bar unit can effectively "prune" input dimension i by one of the following mechanisms: wji
becoming zero, μji moving away from the data, and/or σji shrinking to a very small value. These mechanisms
can occur completely independently for each input dimension. On the other hand, moving any one of the μji's
away from the data or shrinking σji to zero deactivates a Gaussian RBF unit completely. Sigmoid units may also
be pruned (according to the techniques of Section 5.2.5) but such pruning is limited to synaptic weights.
Therefore, Gaussian-bar networks have greater pruning flexibility than sigmoid or Gaussian RBF networks.
Training time could also be reduced by monitoring pruned units and excluding them from the calculations.
Since pruning may lead to very small σji's which, in turn, create a spike response at μji, it is desirable to move
such μji to a location far away from the data in order to eliminate the danger of these spikes affecting
generalization. Here, one may avoid this danger, reduce storage requirements, and increase retrieval speed
by postprocessing trained networks to remove the pruned units of the network.

Many other versions of RBF networks can be found in the literature (see Moody, 1989; Jones et al., 1990 ;
Saha and Keeler, 1990 ; Bishop, 1991 ; Kadirkamanathan et al., 1991 ; Mel and Omohundro, 1991 ; Platt,
1991 ; Musavi et al., 1992 ; Wettschereck and Dietterich, 1992 ; Lay and Hwang, 1993). Roy and Govil
(1993) presented a method based on linear programming models which simultaneously adds RBF units and
trains the RBF network in polynomial time for classification tasks. This training method is described in
detail in Section 6.3.1 in connection with a hyperspherical classifier net similar to the RBF network.

6.2 Cerebellar Model Articulation Controller (CMAC)

Another neural network model which utilizes hidden units with localized receptive fields and which allows
for efficient supervised training is the cerebellar model articulation controller (CMAC). This network was
developed by Albus (1971) as a model of the cerebellum and was later applied to the control of robot
manipulators (Albus, 1975; 1979; 1981). A similar model for the cerebellum was also independently
developed by Marr (1969).

There exist many variants of and extensions to Albus's CMAC. In this section, the CMAC version reported
by Miller et al. (1990c) is described. The CMAC consists of two mappings (processing stages). The first is
a nonlinear transformation which maps the network input x ∈ Rn into a higher-dimensional binary vector
z ∈ {0, 1}J. The vector z is a sparse vector in which at most c of its components are non-zero (c is called
a generalization parameter and is user specified). The second mapping generates the CMAC output y ∈ RL
through a linear matrix-vector product Wz, where W is an L × J matrix of modifiable real-valued weights.

The CMAC has a built-in capability of local generalization: Two similar (in terms of Euclidean distance)

inputs are mapped by the first mapping stage to similar binary vectors,
while dissimilar inputs map into dissimilar vectors. In addition to this local generalization feature, the first
mapping transforms the n-dimensional input vector x into a J-dimensional binary vector z, with J >> n.

This mapping is realized as a cascade of three layers: A layer of input sensor units that feeds its binary
outputs to a layer of logical AND units which, in turn, is sparsely interconnected to a layer of logical OR
units. The output of the OR layer is the vector z. Figure 6.2.1 shows a schematic diagram of the CMAC
with the first processing stage (mapping) shown inside the dashed rectangle. The specific interconnection
patterns between adjacent layers of the CMAC are considered next.
Figure 6.2.1. Schematic illustration of a CMAC for a two-dimensional input.

In addition to supplying the generalization parameter c, the user must also specify a discretization of the
input space. Each component of the input vector x is fed to a series of sensor units with overlapping
receptive fields. Each sensor unit produces a '1' if the input falls within its receptive field and is '0'
otherwise. The width of the receptive field of each sensor controls input generalization, while the offset of
the adjacent fields controls input quantization. The ratio of receptive field width to receptive field offset
defines the generalization parameter c.

The binary outputs of the sensor units are fed into the layer of logical AND units. Each AND unit receives
an input from a group of n sensors, each of which corresponds to one distinct input variable, and thus the
unit's input receptive field is the interior of a hypercube in the input hyperspace (the interior of a square in
the two-dimensional input space of Figure 6.2.1). The AND units are divided into c subsets. The receptive
fields of the sensor units connected to each of the subsets are organized so as to span the input space
without overlap. Each input vector excites one AND unit from each subset, for a total of c excited units for
any input. There exist many ways of organizing the receptive fields of the individual subsets which
produce the above excitation pattern (e.g., Miller et al., 1990c; Parks and Militzer, 1991; Lane et al., 1992).
Miller et al. employ an organization scheme similar to Albus's original scheme where each of the subsets of
AND units is identical in its receptive field organization, but each subset is offset relative to the others
along hyperdiagonals in the input hyperspace. Here, adjacent subsets are offset by the quantization level of
each input.

The number of AND units (also called state-space detectors) resulting from the above organization
can be very large for many practical problems. For example, a system with 10 inputs, each quantized into
100 different levels, would have 100¹⁰ = 10²⁰ vectors (points) in its input space, and would require a
correspondingly large number of AND units! However, most practical problems do not involve the whole
input space; most of the possible input vectors would never be encountered. Therefore, one can
significantly reduce the size of the adaptive output layer and hence reduce storage requirements and training
time by transforming the binary output vector generated by the AND layer into a lower dimensional vector
z (however, the dimension of z is still much larger than n). This is accomplished in the CMAC by randomly
connecting the AND unit outputs to a smaller set of OR units, as shown in Figure 6.2.1. Since exactly c
AND units are excited by any input, at most c OR units will be excited by any input. This leads to a highly
sparse vector z.
The final output of the CMAC is generated by multiplying the vector z by the weight matrix of the output
layer. The lth row of this weight matrix (corresponding to the lth output unit) is adaptively and
independently adjusted (e.g., using the LMS learning rule) in order to approximate a given function fl(x)
implemented by the lth output unit. We may also use the CMAC as a classifier by adding nonlinear
activations such as sigmoids or threshold activation functions to the output units and employing the delta
rule or the perceptron rule (or adaptive Ho-Kashyap rule), respectively. The high degree of sparsity of the
vector z typically leads to fast learning. Additional details on the learning behavior of the CMAC can be
found in Wong and Sideris (1992).
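The following is a highly simplified CMAC sketch for a single output (Python/NumPy): c offset tilings quantize the input, each active tile is hashed down to one of J cells (playing the role of the random AND-to-OR connections), and the c active weights are trained with the LMS rule. The hashing scheme and parameter values are illustrative only and do not reproduce the exact organization of Miller et al.

import numpy as np

class SimpleCMAC:
    def __init__(self, c=8, resolution=0.1, J=2048):
        self.c, self.res, self.J = c, resolution, J
        self.w = np.zeros(J)                                   # output layer weights (one output unit)

    def _active_cells(self, x):
        # x: 1-D NumPy array; one active tile per tiling, tilings offset along the diagonal
        cells = []
        for s in range(self.c):
            q = np.floor(x / self.res + s / self.c).astype(int)
            cells.append(hash((s,) + tuple(q.tolist())) % self.J)   # random many-to-few mapping
        return np.array(cells)

    def predict(self, x):
        return self.w[self._active_cells(x)].sum()             # sparse linear output (the product Wz)

    def train_step(self, x, d, rho=0.1):
        cells = self._active_cells(x)
        err = d - self.w[cells].sum()
        self.w[cells] += rho * err / self.c                     # LMS update on the c active weights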

Because of its intrinsic local generalization property, the CMAC can most successfully approximate
functions that are slowly varying. The CMAC will fail to approximate functions that oscillate rapidly or
which are highly nonlinear (Cotter and Guillerm, 1992; Brown et al., 1993). Thus, the CMAC does not have
universal approximation capabilities like those of multilayer feedforward nets of sigmoid units or RBF nets.

One appealing feature of the CMAC is its efficient realization in software in terms of training time and real-
time operation. The CMAC also has practical hardware realizations using logic cell arrays in VLSI
technology (Miller et al., 1990b). Examples of applications of CMAC include real-time robotics (Miller et
al., 1990d), pattern recognition (Glanz and Miller, 1987), and signal processing (Glanz and Miller, 1989).

6.2.1 CMAC Relation to Rosenblatt's Perceptron and Other Models

One of the earliest adaptive artificial neural network models is Rosenblatt's perceptron (Rosenblatt, 1961).
In its most basic form, this model consists of a hidden layer of a large number of units, which computes
random Boolean functions, connected to an output layer of one or more LTGs, as illustrated in Figure 6.2.2.
(Historically speaking, the term perceptron was originally coined for the architecture of Figure 6.2.2 or its
variants. However, in the current literature, the term perceptron is usually used to refer to the unit in Figure
3.1.1 or 3.1.4).

In Rosenblatt's perceptron, each hidden unit is restricted to receive a small number of inputs relative to the
total number of inputs in a given input pattern. Here, the input pattern is typically assumed to be a two-
dimensional binary image formed on a "retina". As shown in Figure 6.2.2, each hidden unit "sees" only a
small piece of the binary input image. The idea here is that some of the many random hidden units might
get "lucky" and detect "critical features" of the input image, thus allowing the output LTG(s) to perfectly
classify the input image, after being trained with the perceptron rule. Ultimately, the hidden random
Boolean units are intended to map a given nonlinearly separable training set of patterns onto vectors
z ∈ {0, 1}^J of a high-dimensional feature space such that the training set becomes linearly separable. Note
that these ideas are similar to those discussed in relation to polynomial threshold gates (PTGs) in Chapter
One (refer to Figure 1.3.4). The basic difference here is that a binary input PTG employs AND gates as its
hidden units, as opposed to the potentially more powerful Boolean units employed in Rosenblatt's
perceptron. Another important difference is that a PTG allows one of the hidden AND units to "see" (cover)
the whole input pattern, with the other AND units covering substantial portions of the input.
Figure 6.2.2. Rosenblatt's perceptron.

Rosenblatt's perceptron also has common features with the CMAC model discussed above. Both models
restrict the amount of input information seen by each hidden unit, employ a nonlinear Boolean
transformation via the hidden units, and allow for adaptive computation of the output layer weights. If we
take a second look at the CMAC architecture in Figure 6.2.1, we see that the first mapping (represented by
the circuitry inside the dashed rectangle) is realized by a Boolean AND-OR network. Here, the jth OR unit
and the set of AND units feeding into it can be thought of as generating a random Boolean function and
may be compared to a random Boolean unit in Figure 6.2.2. The Boolean functions zj in the CMAC acquire
their random nature from the random interconnectivity pattern assumed between the two layers of AND and
OR units. However, owing to the sparsity of connections between these two layers, the class of Boolean
functions realized by the zj's has neither the richness nor the diversity of the uniformly random Boolean
functions in Rosenblatt's perceptron. These two models also have a minor difference in that the CMAC
normally uses linear output units with LMS training, while the perceptron uses LTGs trained with the
perceptron rule. This difference is due to the different nature of intended applications for each model: The
CMAC is primarily used as a continuous, smooth function approximator whereas Rosenblatt's perceptron
was originally intended as a pattern classifier. One may also note the more structured receptive field
organization in the CMAC compared to the perceptron. Later in this section, we will see that Rosenblatt's
perceptron does not have the intrinsic local generalization feature of the CMAC.

The non-universality of the CMAC is also shared by the perceptron of Rosenblatt. This limitation is due to
the localized nature of the hidden unit receptive fields defined on the input image. For example, it has been
shown (Minsky and Papert, 1969) that this particular perceptron model cannot determine whether or not all
the parts of its input image (geometric figure) are connected to one another, nor can it determine whether or
not the number of 'on' pixels in a finite input image is odd. The latter task is equivalent to the parity problem
and can only be solved by Rosenblatt's perceptron if at least one hidden unit is allowed to have its receptive
field span the entire input image.

The limitations of Rosenblatt's model can be relaxed by allowing every hidden unit to see all inputs.
However, this becomes impractical when the dimension n of the input is large, since there would be 2^(2^n)
possible Boolean functions for each hidden random Boolean unit to choose from. Therefore, there is very
little chance for the hidden units to randomly become a detector of "critical features," unless we start with
an exponentially large number of hidden units. But this requirement renders the model impractical.
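For reference, the count behind this observation (a worked number, not taken from the text): the number of distinct Boolean functions of n binary inputs is

```latex
\#\{\text{Boolean functions of } n \text{ inputs}\} = 2^{\,2^{n}},
\qquad
n = 10:\;\; 2^{2^{10}} = 2^{1024} \approx 1.8 \times 10^{308}.
```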

Yet another weakness of Rosenblatt's perceptron is that it is not "robustness-preserving." In other words, it
does not allow for good local generalization. To see this, consider two similar unipolar binary input patterns
(vectors) x' and x". Here, we use as a similarity measure the normalized Hamming distance Dx between
the two input patterns,

Dx(x', x") = (1/n) Σ_{i=1}^{n} |x'_i − x"_i|        (6.2.1)

If x' is similar to x", then D(x', x") is much smaller than 1. Now, because of the uniform random nature of
the hidden Boolean units, the output of any hidden unit zj is one (or zero) with a probability of 0.5
regardless of the input. Thus, the activation patterns z at the output of the hidden layer are completely
uncorrelated. In particular, the normalized Hamming distance Dz between the two z vectors corresponding
to any two input vectors is approximately equal to 0.5. Therefore, this model is not robustness-preserving.
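A small simulation illustrating this point. It idealizes each hidden unit as a uniformly random Boolean function of the full input (a lookup table), which is the setting under which the Dz ≈ 0.5 argument holds exactly; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, J = 12, 200                       # input dimension, number of hidden units

# each hidden unit: a uniformly random truth table over all 2^n input patterns
tables = rng.integers(0, 2, size=(J, 2 ** n))

def hidden(x):
    index = int("".join(map(str, x)), 2)   # input pattern used as table address
    return tables[:, index]

x1 = rng.integers(0, 2, size=n)
x2 = x1.copy()
x2[0] ^= 1                               # flip a single bit: D_x = 1/12
Dx = np.mean(x1 != x2)
Dz = np.mean(hidden(x1) != hidden(x2))
print(f"D_x = {Dx:.3f}  D_z = {Dz:.3f}")  # D_z clusters around 0.5
```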

Gallant and Smith (1987) and independently Hassoun (1988) have proposed a practical classifier model
inspired by Rosenblatt's perceptron, but which solves some of the problems associated with Rosenblatt's
model. This model, the Gallant-Smith-Hassoun (GSH) model, is also similar to a version of an early model
studied by Gamba (1961), and referred to as the Gamba Perceptron (Minsky and Papert, 1969). The main
distinguishing features of the GSH model are that every hidden unit sees the whole input pattern x and that
the hidden units are random LTGs. For the trainable output units, the GSH model uses Ho-Kashyap
learning in Hassoun's version and the pocket algorithm [a modified version of the perceptron learning rule
that converges for nonlinearly separable problems. For details, see Gallant (1993)] in the Gallant and Smith
model. The hidden LTGs assume fixed integer weights and bias (threshold) generated randomly in some
range [−a, +a].
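A minimal sketch of the GSH idea, assuming a random integer-weight LTG hidden layer and a single output LTG trained with the plain perceptron rule (the published versions use the pocket algorithm or Ho-Kashyap learning); the 4-bit parity task and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, J, a = 4, 60, 3                      # inputs, hidden random LTGs, weight range

# hidden layer: fixed random integer weights and thresholds drawn from [-a, +a]
Wh = rng.integers(-a, a + 1, size=(J, n))
th = rng.integers(-a, a + 1, size=J)

def hidden(x):
    return (Wh @ x >= th).astype(int)    # LTG outputs in {0, 1}

# training set: 4-bit parity (nonlinearly separable in the original space)
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)])
d = X.sum(axis=1) % 2                    # parity targets in {0, 1}

# single output LTG trained with the perceptron rule on the hidden code z
w = np.zeros(J + 1)
for _ in range(200):
    for x, t in zip(X, d):
        z = np.append(hidden(x), 1.0)    # append bias input
        y = int(w @ z >= 0)
        w += (t - y) * z                 # perceptron update

# with a few dozen random LTGs the hidden code is usually linearly separable,
# so the error count typically reaches zero
errors = sum(int(w @ np.append(hidden(x), 1.0) >= 0) != t for x, t in zip(X, d))
print("misclassified patterns:", errors)
```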

The use of random LTGs as opposed to random Boolean units as hidden units has the advantage of a "rich"
distributed representation of the critical features in the hidden activation vector z. This distributed
representation coupled with the ability of hidden LTGs to see the full input pattern makes the GSH model a
universal approximator of binary mappings. However, an important question here is how many hidden
random LTGs are required to realize any arbitrary n-input binary mapping of m points/vectors? This
question can be easily answered for the case where the hidden LTGs are allowed to have arbitrary (not
random) parameters (weights and thresholds). In this case, and recalling Problem 2.1.2 and Theorem 1.4.1,
we find that m hidden LTGs are sufficient to realize any binary function of m points in {0, 1}n. Now, if we
assume hidden LTGs with random parameters, we might intuitively expect that the required number of
hidden units in the GSH model for approximating any binary function of m points will be much greater than
m. Empirical results show that the above intuitive answer is not correct! Simulations with the 7-bit parity
function, random functions, and other completely as well as partially specified Boolean functions reveal

that the required number of random LTGs remains on the order of m (Gallant and Smith, 1987). Note that for
the worst case scenario of a complex completely specified n-input Boolean function where m = 2n, the
number of hidden LTGs in the GSH model still scales exponentially in n.

The above result on the size of the hidden layer in the GSH model may be partially explained in terms of
the mapping properties of the random LTG layer. Consider two n-dimensional binary input vectors x' and
x" which are mapped by the random LTG layer of the GSH model into the J-dimensional binary vectors z'
and z", respectively. Let us assume n is large. Also, assume that the weights and thresholds of the LTGs are
generated according to normal distributions. Using a result of Amari (1974, 1990), we may relate the
normalized Hamming distances Dz and Dx according to

(6.2.2)

where
(6.2.3)

The parameter A is close to unity if the weights and thresholds are identically distributed. Figure 6.2.3
shows a plot of Equation (6.2.2) for A = 1.

Figure 6.2.3. Normalized Hamming distance between two hidden layer activation vectors versus the
distance between the two corresponding input vectors.

For small values of Dx, Dz is small so that the output activity of the hidden layer is similar for similar
inputs. Therefore, the random LTG layer exhibits robustness-preserving features. Also, since the slope
dDz/dDx approaches infinity as Dx approaches zero, the differences among similar vectors in
a small neighborhood of the input space are amplified in the corresponding z vectors. Such a property is
useful for recognizing differences among very similar input vectors, as is often desired when dealing with
Boolean mappings (e.g., parity) or highly nonlinear functions. Equation (6.2.2) also implies that very
different inputs map into very different outputs. This richness of distributed representation of
input features at the hidden layer's output helps increase the probability of the output unit(s) to find good
approximations to arbitrary mappings. In addition to the above desirable features, the GSH model is very
appealing from a hardware realization point of view. Here, the restricted dynamic range integer parameters
of the hidden LTGs allow for their simple realization in VLSI technology. We also note that the range
[−a, +a] of the hidden LTGs' integer parameters can be made as small as [−1, +1]; however, we pay for this
desirable reduced dynamic range by having to increase the number of hidden LTGs. This phenomenon is
intuitively justifiable and has been verified empirically in Hassoun (1988).

6.3 Unit-Allocating Adaptive Networks

In Chapter Four (Section 4.9) we saw that training a multilayer feedforward neural network with fixed
resources requires, in the worst case, exponential time. On the other hand, polynomial time training is
possible, in the worst case, if we are willing to use unit-allocating nets which are capable of allocating new
units, as needed, as more patterns are learned (Baum, 1989).

In the context of pattern classification, the classical k-nearest neighbors classifier (Fix and Hodges, 1951;
Duda and Hart, 1973) is an extreme example of a unit-allocating machine with O(1) learning complexity.
The k-nearest neighbors classifier assigns to any new input the class most heavily represented among its k-
nearest neighbors. This classifier represents an extreme unit-allocating machine since it allocates a new unit
for every learned example in a training set. There are no computations involved in adjusting the
"parameters" of allocated units: Each new allocated unit stores exactly the current example (vector)
presented. In other words, no transformation or abstraction of the examples in the training set is required
and one can immediately proceed to use this machine for classification regardless of the size of the training
set. Therefore, the training time of this classifier does not scale with the number of examples, m, which
means that the complexity is O(1). Surprisingly, it has been shown (Cover and Hart, 1967) that in the limit
of an infinite training set, this simple classifier has a probability of classification error less than twice the
minimum achievable error probability of the optimal Bayes classifier for any integer value of k ≥ 1.
Unfortunately, the performance of the nearest neighbor classifier deteriorates for training sets of small size.
Also, the convergence to the above asymptotic performance can be arbitrarily slow, and the classification
error rate need not even decrease monotonically with m (Duda and Hart, 1973).

Even when utilizing a large training set, k-nearest neighbors classifiers are impractical as on-line classifiers
due to the large number of computations required in classifying a new input. Thus, we are forced to use far
fewer than one unit for every training sample; i.e., we must create and load an abstraction of the training
data. This obviously leads to higher learning complexity than O(1).

Practical trainable networks should have a number of desirable attributes. The most significant of these
attributes are fast learning speed, accurate learning, and compact representation of training data. The reason
for desiring the first two attributes is obvious. On the other hand, the formation of a compact representation
is important for two reasons: good generalization (fewer free parameters lead to less overfitting) and
feasibility of hardware (VLSI) realization, since silicon surface area is at a premium.

In the following, we consider three practical unit allocating networks. These networks are capable of
forming compact representations of data easily and rapidly. Two of the networks considered are classifier
networks. The third network is capable of classification as well as approximation of continuous functions.

6.3.1 Hyperspherical Classifiers

Pattern classification in n-dimensional space consists of partitioning the space into category (class) regions
with decision boundaries and assigning an unknown point in this space to the class in whose region it falls.
The typical geometrical shape of the decision boundaries for classical pattern classifiers are hyperplanes
and hypersurfaces. In this section, we discuss two unit-allocating adaptive networks/classifiers which
employ hyperspherical boundary forms.

Hyperspherical classifiers were introduced by Cooper (1962, 1966), Batchelor and Wilkins (1968) and
Batchelor (1969) [See Batchelor (1974) for a summary of early work]. Like the nearest-neighbor classifier,
a hyperspherical classifier is based upon the storage of examples represented as points in a metric space
(e.g., Euclidean space). The metric defined on this space is a measure of the distance between an unknown
input pattern and a known category. Each stored point has associated with it a finite "radius" that defines the
point's region of influence. The interior of the resulting hypersphere represents the decision region
associated with the center point's category. This region of influence makes a hyperspherical classifier
typically more conservative in terms of storage than the nearest neighbor classifier. Furthermore, the finite
radii of the regions of influence can make a hyperspherical classifier abstain from classifying patterns from
unknown categories (these patterns are typically represented as points in the input space which are far away
from any underlying class regions). This latter feature enhances the classifier's ability to reject "rubbish."

Restricted Coulomb Energy (RCE) Classifier

The following is a description of a specific network realization of a hyperspherical classifier proposed by


Reilly et al. (1982) and Reilly and Cooper (1990). This model is named the "restricted Coulomb energy"
(RCE) network. The name is derived from the form of the "potential function" governing the mapping
characteristics, which has been interpreted (Scofield et al., 1988) as a restricted form of a "high-dimensional
Coulomb potential" between a positive test charge and negative charges placed at various sites.

The architecture of the RCE network contains two layers: A hidden layer and an output layer. The hidden
layer is fully interconnected to all components of an input pattern (vector) x ∈ R^n. The output layer consists
of L units. The output layer is sparsely connected to the hidden layer; each hidden unit projects its output to
one and only one output unit. The architecture of the RCE net is shown in Figure 6.3.1. Each unit in the
output layer corresponds to a pattern category. The network assigns an input pattern to a category l if the
output cell yl is activated in response to the input. The decision of the network is "unambiguous" if one and
only one output unit is active upon the presentation of an input, otherwise, the decision is "ambiguous."
Figure 6.3.1. RCE network architecture.

The transfer characteristic of the jth hidden unit is given by

zj = f(rj − D(μj, x))        (6.3.1)

where μj ∈ R^n is a parameter vector called the "center," rj ∈ R is a threshold or "radius," and D is some predefined
distance metric between the vectors μj and x (e.g., Euclidean distance between real-valued vectors or Hamming
distance between binary-valued vectors). Here, f is the threshold activation function given by

f(u) = 1 if u ≥ 0, and f(u) = 0 otherwise        (6.3.2)

On the other hand, the transfer function of a unit in the output layer is the logical OR function. The jth
hidden unit in the RCE net is associated with a hyperspherical region of the input space which defines the
unit's region of influence. The location of this region is defined by the center μj and its size is determined by
the radius rj. According to Equation (6.3.1), any input pattern falling within the influence region of a hidden
unit will cause this unit to fire. Thus, the hidden units define a collection of hyperspheres in the space of
input patterns. Some of these hyperspheres may overlap. When a pattern falls within the regions of
influence of several hidden units, they will all "fire" and switch on the output units they are connected to.

Training the RCE net involves two mechanisms: Unit commitment and modification of hidden unit radii.
Units may be committed to the hidden and output layers. When committed, units are interconnected so that
they do not violate the RCE interconnectivity pattern described above.

Initially, the network starts with no units. An arbitrary sample pattern x1 is selected from the training set
and one hidden unit and one output unit are allocated. The allocated hidden unit center μ1 is set equal to x1
and its radius r1 is set equal to a user-defined parameter rmax (rmax is the maximum size of the region of
influence ever assigned to a hidden unit). This unit is made fully interconnected to the input pattern and
projects its output z1 to the allocated output unit (OR gate). This output unit represents the category of the
input x1. Next, we choose a second arbitrary example x2 and feed it to the current network. Here, one of
three scenarios emerges. First, if x2 causes the output unit to fire and if x2 belongs to the category
represented by this unit, then do nothing and continue training with a new input. In general, this scenario
might occur at a point during training where the network has multiple hidden and output units representing
various categories. In this case, if the input pattern causes only the output unit representing the correct
category to fire, then do nothing and continue the training session with a new input. On the other hand, the
correct output unit may fire along with one or more other output units. This indicates that the regions of
influence of hidden units representing various categories overlap and that the present input pattern lies
inside the overlap region. Here, proceed by reducing the threshold values (radii) of all active hidden units
that are associated with categories other than the correct one until they become inactive.

The second scenario involves the case when the input x2 happens to belong to the same category as x1 but
does not cause the output unit to fire. Here, allocate a new hidden unit with center μ2 = x2 and radius rmax
and connect the output z2 of this unit to the output unit. The general version of this scenario occurs when the
network has multiple output units. Now, if the input pattern causes no output units (including the one
representing the category of the input) to fire, then allocate a new hidden unit centered at the current input
vector/pattern and assign it a radius r = min(rmax, dmin), where dmin is the distance from this new center to
the nearest center of a hidden unit representing any category different from that of the current input pattern.
The new allocated unit is connected to the output unit representing the category of the input pattern. Note
that this setting of r may cause one or more output units representing the wrong category to fire. This
should not be a problem since the shrinking of the region of influence mechanism described under the first
scenario will ultimately rectify the situation. If, under this scenario, some hidden units representing the
wrong category fire, then the radii of such units are shrunk as described earlier under the first scenario.

Finally, the third scenario represents the case of an input with a new category that is not represented by the
network. Here, as in the first step of the training procedure, a hidden unit centered at this input is allocated
and its radius is set as in the second scenario. Also, a new output unit representing the new category is
added which receives an input from the newly allocated hidden unit. Again if existing hidden units become
active under this scenario, then their radii are shrunk until they become inactive. The training phase
continues (by cycling through the training set or by updating in response to a stream of examples) until no
new units are allocated and the size of the regions of influence of all hidden units converges.
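A compact sketch of the allocation-and-shrink procedure just described, assuming the Euclidean metric, a single training pass, and illustrative values for r_max and the shrinking offset; it is a simplified illustration, not the exact formulation of Reilly et al.

```python
import numpy as np

r_max = 1.0
centers, radii, labels = [], [], []       # one entry per allocated hidden unit

def dist(a, b):
    return np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

def classify(x):
    """Return the set of categories whose regions of influence contain x."""
    return {labels[j] for j in range(len(centers)) if dist(x, centers[j]) < radii[j]}

def train_one(x, category, eps=1e-3):
    fired = classify(x)
    # shrink the influence regions of wrongly firing hidden units until inactive
    for j in range(len(centers)):
        if labels[j] != category and dist(x, centers[j]) < radii[j]:
            radii[j] = dist(x, centers[j]) - eps
    if category not in fired:             # no correct unit fired: allocate a unit
        d_min = min((dist(x, centers[j]) for j in range(len(centers))
                     if labels[j] != category), default=r_max)
        centers.append(np.asarray(x, dtype=float))
        radii.append(min(r_max, d_min))
        labels.append(category)

# toy two-class data in the plane
for x, c in [([0.0, 0.0], "A"), ([0.2, 0.1], "A"), ([1.0, 1.0], "B"), ([0.9, 1.2], "B")]:
    train_one(x, c)
print(classify([0.1, 0.0]), classify([1.1, 1.1]))
```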

The RCE net is capable of developing proper separating boundaries for nonlinearly separable problems.
The reader is referred to Figure 6.3.2 for a schematic representation of separating boundaries realized by the
regions of influence for a nonlinearly separable two-class problem in two-dimensional pattern space. The
RCE net can also handle the case where a single category is contained in several disjoint regions. In
principle, an arbitrary degree of accuracy in the separating boundaries can be achieved if no restriction is
placed on the size of the training set. Dynamic category learning is also possible with the RCE network.
That is, new classes of patterns can be introduced at arbitrary points in training without always involving
the need to retrain the network on all of its previously trained data. Note that, in its present form, the RCE
network is not suitable for handling overlapping class regions. Here, the learning algorithm will tend to
drive the radii of all hidden unit regions of influence to zero. It also leads to allocating a large number of
hidden units approximately equal to the number of training examples coming from the regions of overlap.

Figure 6.3.2. Schematic diagram for an RCE classifier in two-dimensional space solving a nonlinearly
separable problem.
Several variations of the RCE network are possible. For example, one might employ mechanisms that allow
the centers of the hyperspheres to drift to more optimal locations in the input space. A second variation
would be to allow the hyperspheres to grow. These two mechanisms have been considered for more general
hyperspherical classifiers than RCE classifiers (Batchelor, 1974). Modifications to the RCE network for
handling overlapping class regions can be found in Reilly and Cooper (1990). Empirical examination of
RCE classifiers appeared in Lee and Lippmann (1990) and Hudak (1992).

Polynomial-Time Trained Hyperspherical Classifier

In the following, we describe a classifier network with an architecture identical to the RCE network (see
Figure 6.3.1), but which employs a training algorithm that is shown to construct and train the classifier
network in polynomial time. This polynomial time classifier (PTC) uses clustering and linear programming
models to incrementally generate the hidden layer. Our description of the PTC net is based on the work of
Roy and Mukhopadhyay (1991), Mukhopadhyay et al. (1993), and Roy et al. (1993).

As in the RCE net, the PTC net uses hidden units to cover class regions. However, the region of influence
of each hidden unit need not be restricted to the hypersphere. A quadratic region of influence is assumed
which defines the transfer characteristics of a hidden unit according to:

z = f(Q(x))        (6.3.3)

where

Q(x) = w0 + Σ_{i=1}^{n} wi xi + Σ_{i=1}^{n} Σ_{j=i}^{n} wij xi xj        (6.3.4)

and w0, wi, and wij are modifiable real-valued parameters to be learned; f is the threshold activation function of
Equation (6.3.2). Here, the margin αj used in the linear program described below is greater than or equal to a
small positive constant δ (say, δ = 0.001), and its value is computed as part of the training procedure.
With the transfer characteristics as in Equation (6.3.3), one may view each hidden unit as a quadratic
threshold gate (QTG), introduced in Chapter One. A quadratic region of influence contains the hypersphere
as a special case but allows for the realization of hyperellipsoids and hyperboloids. This enables the PTC to
form more accurate boundary regions than the RCE classifier. Other regions of influence may be used in the
PTC as long as they are represented by functions which are linear in the parameters to be learned: w0, wi,
and wij.

The learning algorithm determines the parameters of the hidden units in such a way that only sample
patterns of a designated class are "covered" by a hidden unit representing this class. The algorithm also
attempts to minimize the number of hidden units required to accurately solve a given classification problem.

Initially, the learning algorithm attempts to use a single hidden unit to cover a whole class region. If this
fails, the sample patterns in that class are split into two or more clusters, using a clustering procedure (e.g.,
the k-means procedure described in Section 6.1) and then attempts are made to adapt separate hidden units
to cover each of these clusters. If that fails, or if only some clusters are covered, then the uncovered clusters
are further split to be separately covered until covers are provided for each ultimate cluster. The idea here is
to allow a hidden unit to cover (represent) as many of the sample patterns within a given class as is feasibly
possible (without including sample patterns from any other class), thereby minimizing the total number of
hidden units needed to cover that class.
The parameters of each hidden unit are computed by solving a linear programming problem [for an
accessible description of linear programming, the reader is referred to Chapter Five in Duda and Hart
(1973)]. The linear program is used to adjust the location and boundaries of the region of influence of a
hidden unit representing a given class cluster such that sample patterns from this cluster cause the net inputs
to the hidden unit [the argument of f in Equation (6.3.3)] to be at least slightly positive and those from all
other classes to be at least slightly negative. Linear programming is appropriate here because the regions of
influence are defined by quadratics which are linear functions of their parameters. Formally put, the linear
program set up for adjusting the parameters of the jth hidden unit is as follows: find w0, wi, wij, and αj such that

Qj(x^k) ≥ αj   for every training pattern x^k in the cluster assigned to unit j
Qj(x^k) ≤ −αj  for every training pattern x^k belonging to any other class        (6.3.5)
αj ≥ δ

The positive margin αj ensures a finite separation between the classes and prevents the formation of common
boundaries. Unit j becomes a permanent fixed unit of the PTC net if and only if the solution to the above
problem is feasible. Roy et al. (1993) gave an extension to the above training method which enhances the
PTC performance for data with outlier patterns. An alternative to linear programming is to use the Ho-
Kashyap algorithm described in Section 3.1.4 which guarantees class separation (with finite separation
between classes) if a solution exists or, otherwise, gives an indication of nonlinear separability.
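As an illustration, the following sketch sets up the feasibility problem for a single hidden unit with scipy.optimize.linprog, using the quadratic region of influence of Equations (6.3.3) and (6.3.4). The fixed small margin (rather than one computed within the program), the variable ordering, and the toy data are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def phi(x):
    """Quadratic feature map [1, x_i, x_i*x_j (i <= j)]: Q(x) = w . phi(x) is linear in w."""
    x = np.asarray(x, dtype=float)
    quad = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
    return np.concatenate(([1.0], x, quad))

def fit_hidden_unit(cluster, others, alpha=1e-3):
    """Try to cover `cluster` with one quadratic region of influence that
    excludes every pattern in `others`; returns the weights or None if infeasible."""
    # constraints in the form A_ub @ w <= b_ub
    A_ub = [-phi(x) for x in cluster]            # -Q(x) <= -alpha  (inside:  Q >=  alpha)
    b_ub = [-alpha] * len(cluster)
    A_ub += [phi(x) for x in others]             #  Q(x) <= -alpha  (outside: Q <= -alpha)
    b_ub += [-alpha] * len(others)
    dim = len(phi(cluster[0]))
    res = linprog(c=np.zeros(dim), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * dim)   # pure feasibility problem
    return res.x if res.success else None

# a cluster of class-1 points surrounded by class-2 points (toy 2-D example)
class1 = [[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]]
class2 = [[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]]
w = fit_hidden_unit(class1, class2)
print("feasible cover found:", w is not None)
```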

Similar to the RCE net, all hidden units in the PTC whose respective regions of influence cover clusters of
the same class have their outputs connected to a unique logical OR unit (or an LTG realizing the OR
function) in the output layer.

The polynomial complexity of the above training algorithm is shown next. For each class label

c = 1, 2, ..., L, let mc be the number of pattern vectors (for a total of m = m1 + m2 + ... + mL patterns) to be covered.
Consider the worst case scenario (from a computational point of view) where a separate cover (hidden unit)
is required for each training pattern. Thus, in this case, all of the linear programs from Equation (6.3.5) for a
class will be infeasible until the class is broken up into mc single point clusters. All single point clusters will
produce feasible solutions which implies that we have just designed a PTC with perfect recall on the
training set (note, however, that such a design will have poor generalization!). Let us further assume the
worst case scenario in which the mc pattern vectors are broken up into one extra cluster at each clustering
stage. Using simple counting arguments, we can readily show that the number of linear programs (feasible and
infeasible combined) that must be solved during successful training is polynomial in m, with the clustering
procedure invoked at most m times (see the counting sketch following this paragraph). Now, since each linear
program can be solved in polynomial time (Karmarkar, 1984;
Khachian, 1979) and each clustering operation to obtain a specified number of clusters can also be
performed in polynomial time (Everitt, 1980; Hartigan, 1975), it follows that the above learning algorithm
is of polynomial complexity.
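One way to make the counting explicit, under the simplifying assumption (ours, for concreteness) that at the kth clustering stage the patterns of class c are split into k clusters and one linear program is attempted per cluster, is

```latex
\text{LPs solved} \;\le\; \sum_{c=1}^{L} \sum_{k=1}^{m_c} k
\;=\; \sum_{c=1}^{L} \frac{m_c\,(m_c+1)}{2}
\;\le\; \frac{m\,(m+1)}{2} \;=\; O(m^{2}),
\qquad m = \sum_{c=1}^{L} m_c .
```

So at most on the order of m^2 linear programs are solved, each of polynomial cost, consistent with the claim above.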

What remains to be seen is the generalization capability of PTC nets. As for most artificial neural nets, only
empirical studies of generalization are available. One such study (Roy et al., 1993) reported comparable
classification performance of a PTC net to the k-nearest neighbors classifier and backprop nets on relatively
small sample classification tasks. In general, a PTC net allocates a much smaller number of hidden units
compared to the RCE net when trained on the same data. However, the PTC training procedure requires
simultaneous access to all training examples which makes the PTC net inapplicable for on-line
implementations.

6.3.2 Cascade-Correlation Network

The Cascade-Correlation Network (CCN) proposed by Fahlman and Lebiere (1990) is yet another example
of a unit-allocating architecture. The CCN was developed in an attempt to solve the so-called "moving
target problem," which is attributed to the slowness of backprop learning. Because all of the weights in a
backprop net are changing at once, each hidden unit sees a constantly changing environment. Therefore,
instead of moving quickly to assume useful roles in the overall problem solution, the hidden units engage in
a complex "dance" with much wasted motion (Fahlman and Lebiere, 1990).

The CCN differs from all networks considered so far in two major ways: (1) It builds a deep net of cascaded
units (as opposed to a net with a wide hidden layer) and (2) it can allocate more than one type of hidden
unit; for example, sigmoidal units and Gaussian units may coexist in the same network. This network is
suited for classification tasks or approximation of continuous functions. The CCN has a significant learning
speed advantage over backprop nets, since units are trained individually without requiring back propagation
of error signals.

Figure 6.3.3. The cascade-correlation network architecture with three hidden units. The number of each
hidden unit represents the order in which it has been allocated.

The CCN architecture after allocating three hidden units is illustrated in Figure 6.3.3. The number of output
units is dictated by the application at hand, and by the way the designer chooses to encode the outputs. The
hidden units can be sigmoid units, Gaussian units, etc. or a mixture of such units. An important requirement
on a candidate hidden unit is that it has a differentiable transfer function. The original CCN uses sigmoid
units as hidden units. The output units can also take various forms, but typically a sigmoid (or hyperbolic
tangent) activation unit is employed for classification tasks. Linear output units are employed when the
CCN is used for approximating mappings with real-valued outputs (e.g. function
approximation/interpolation).

Every input (including a bias x0 = 1) is connected to every output unit by a connection with an adjustable
weight. Each allocated hidden unit receives a connection from each pre-existing hidden unit. Therefore,
each added hidden unit defines a new one-unit layer. This may lead to a very deep network with high fan-in
for the hidden and output units.

The learning algorithm consists of two phases: Output unit training and hidden unit training. Initially, the
network has no hidden units. In the first phase, output unit weights are trained (e.g., using the delta rule) to
minimize the usual sum of squared error (SSE) measure at the output layer. At the completion of training, if
the SSE remains above a predefined threshold, the residual errors e_l^k (the differences between the
actual and desired outputs) are recorded for each output unit l on each training pattern xk, k = 1, 2, ..., m. Also,
a new hidden unit is inserted whose weights are determined by the hidden unit training phase described
below. Note that if the first training phase converges (SSE error is very small) with no hidden units, then
training is stopped (this convergence would be an indication that the training set is linearly separable,
assuming we are dealing with a classification task).

In the hidden unit training phase, a pool of randomly initialized candidate hidden units (typically four to
eight units) is trained in parallel. Later, one trained unit from this pool, the one which best optimizes some
performance measure, is selected for permanent placement in the net. This multiple candidate training
strategy minimizes the chance that a useless unit will be permanently allocated because an individual
candidate has gotten stuck at a poor set of weights during training. Each candidate hidden unit receives
trainable input connections from all of the network's external inputs and from all pre-existing hidden units,
if any. During this phase, the outputs of these candidate units are not yet connected to any output units in
the network. Next, the weights of each candidate unit are adjusted, independently of other candidates in the
pool, in the direction of maximizing a performance measure E. Here, E is chosen as the sum, over all output
units, of the magnitude of the covariance between the candidate unit's output z^k and the residual output error
e_l^k observed at output unit l. Formally, the criterion E is defined as:

E = Σ_l | Σ_k (z^k − ⟨z⟩)(e_l^k − ⟨e_l⟩) |        (6.3.6)

where ⟨z⟩ and ⟨e_l⟩ are average values taken over all patterns xk. The maximization of E by each
candidate unit is done by incrementing the weight vector w for this unit by an amount proportional to the

gradient ∂E/∂w; i.e., performing steepest gradient ascent on E. Note that ⟨z⟩ is recomputed every time w is
incremented by averaging the unit's outputs due to all training patterns. During this training phase, the
weights of any pre-existing hidden units are frozen. In fact once allocated, a hidden unit never changes its
weights. After the training reaches an asymptote, the candidate unit which achieves the highest covariance
score E is added to the network by fanning out its output to all output layer units through adjustable
weights. The motivation here is that, by maximizing covariance, a unit becomes tuned to the features in the
input pattern which have not been captured by the existing net. This unit then becomes a permanent feature-
detector in the network, available for producing outputs or for creating other, more complex feature
detectors. At this point, the network repeats the output layer training phase using the delta learning rule, and

the residual output errors are recomputed. The two training phases continue to alternate until the output
SSE is sufficiently small, which is almost always possible.
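A sketch of one candidate-unit update, using the covariance score of Equation (6.3.6) as reconstructed above and treating the averages as constants when forming the gradient (Fahlman and Lebiere's simplification); the learning rate, data, and array shapes are illustrative.

```python
import numpy as np

def candidate_step(w, X, E_res, eta=0.05):
    """One gradient-ascent step on the covariance score for one candidate unit.
    X: (m, n+1) inputs with a bias column; E_res: (m, L) residual output errors."""
    net = X @ w
    z = np.tanh(net)                          # candidate unit output per pattern
    dz = 1.0 - z ** 2                         # tanh derivative
    zc = z - z.mean()                         # z^k - <z>
    ec = E_res - E_res.mean(axis=0)           # e_l^k - <e_l>
    cov = zc @ ec                             # shape (L,): per-output covariances
    E = np.abs(cov).sum()                     # the score being maximized
    grad = X.T @ (dz * (ec @ np.sign(cov)))   # means treated as constants
    return w + eta * grad, E

# toy usage: 20 random patterns, 3 inputs plus bias, residual errors of 2 output units
rng = np.random.default_rng(3)
X = np.hstack([rng.normal(size=(20, 3)), np.ones((20, 1))])
E_res = rng.normal(size=(20, 2))
w = rng.normal(scale=0.1, size=4)
for _ in range(100):
    w, score = candidate_step(w, X, E_res)
print("final covariance score:", round(score, 3))
```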

The CCN has been empirically shown to be capable of learning some hard classification-type tasks 10 to
100 times faster than backprop. Fahlman and Lebiere (1990) have empirically estimated the learning time in
epochs to be roughly J log(J), where J is the number of hidden units ultimately needed to solve the given
task. Unfortunately, though, a precise value of J is almost impossible to determine. In addition, the CCN is
capable of learning difficult tasks on which backprop nets have been found to get stuck in local minima.
One of these difficult tasks is the n-bit parity problem. Another is the two spiral problem shown in Figure
6.3.4 (a).

The two spiral problem was proposed by Alex Wieland as a benchmark which is extremely hard for a
standard backprop net to solve. The task requires a network with two real-valued inputs and a single output
to learn a mapping that distinguishes between points on two intertwined spirals. The associated training set
consists of 194 point coordinates (x1, x2), half of which come from each spiral. The output should be +1 for
the first spiral and −1 for the other spiral. Figure 6.3.4 (b) shows a solution to this problem generated by a
trained CCN (Fahlman and Lebiere, 1990). This task requires, typically, 12 to 19 sigmoidal hidden units
and an average of 1,700 training cycles when a pool of 8 candidate hidden units is used during training. For
comparison purposes, we show in Figure 6.3.4 (c) a solution generated by a backprop network employing
three hidden layers of five units each and with "shortcut" connections between layers (Lang and Witbrock,
1989). Here, each unit receives incoming connections from every unit in every earlier layer, not just from
the immediately preceding layer.

The backprop net requires about 20,000 training cycles and about 8,000 cycles if Fahlman's version of
backprop (see Section 5.2.3) is used. Therefore, the CCN outperforms standard backprop in training cycles
by a factor of 10, while building a network of about the same complexity. In terms of actual computation on
a serial machine, however, the CCN shows 50-fold speedup over standard backprop on the two spiral task.
This is due to the lower number of computations constituting a single CCN training cycle compared to that
of standard backprop. Note also that the solution generated by the CCN is qualitatively better than that
generated by backprop. (The reader might have already noticed a difference in the spiral directions between
Figures 6.3.4 (b) and (c); this is simply because the simulation in (c) used training spirals which have the
opposite direction to those shown in Figure 6.3.4 (a).)


Figure 6.3.4. (a) Training samples for the two spiral problem. (b) Solution found by a cascade-correlation
network. (c) Solution found by a 3-hidden layer backprop network which employs "short-cut" connections.
[(a) and (b) are from S. E. Fahlman and C. Lebiere, 1990, and (c) is from K. J. Lang and M. J. Witbrock,
1989, with permission of Morgan Kaufmann.]

On the other hand, when used for function approximation, backprop outperforms the CCN. Simulations
with the Mackey-Glass time series prediction task show poor generalization performance for the CCN
(Crowder, 1991); in this case, the CCN suffers from overfitting. Another undesirable feature of the CCN is
its inefficient hardware implementation: The deep layered architecture leads to a delay in response
proportional to the number of layers. Also, the high fan-in of the hidden units imposes additional
implementation constraints for VLSI technology; high device fan-in leads to increased device capacitance
and thus slower devices. Finally, it should be noted that the CCN requires that all training patterns be

available for computing the averages ⟨z⟩ and ⟨e_l⟩ after each training cycle, which makes the CCN
inappropriate for on-line implementations.

6.4 Clustering Networks


The task of pattern clustering is to automatically group unlabeled input vectors into several categories
(clusters), so that each input is assigned a label corresponding to a unique cluster. The clustering process is
normally driven by a similarity measure. Vectors in the same cluster are similar, which usually means that
they are "close" to each other in the input space. A simple clustering net which employs competitive
learning has already been covered in Section 3.4. In this section, two additional clustering neural networks
are described which have more interesting features than the simple competitive net of Chapter Three. Either
network is capable of automatic discovery (estimation) of the underlying number of clusters.

There are various ways of representing clusters. The first network we describe uses a simple representation
in which each cluster is represented by the weight vector of a prototype unit (this is also the prototype
representation scheme adopted by the simple competitive net). The first network is also characterized by its
ability to allocate clusters incrementally. Networks with incremental clustering capability can handle an
infinite stream of input data. They don't require large memory to store training data because their cluster
prototype units contain implicit representation of all the inputs previously encountered.

The second clustering network described in this section has a more complex architecture than the first net. It
employs a distributed representation as opposed to a single prototype unit cluster representation. This
network is non-incremental in terms of cluster formation. However, the highly nonlinear multiple layer
architecture of this clustering net enables it to perform well on very difficult clustering tasks. Another
interesting property of this net is that it does not require an explicit user defined similarity measure! The
network develops its own internal measure of similarity as part of the training phase.

6.4.1 Adaptive Resonance Theory (ART) Networks

Adaptive resonance architectures are artificial neural networks that are capable of stable categorization of
an arbitrary sequence of unlabeled input patterns in real time. These architectures are capable of continuous
training with non-stationary inputs. They also solve the "stability-plasticity dilemma," namely, they let the
network adapt yet prevent current inputs from destroying past training. The basic principles of the
underlying theory of these networks, known as adaptive resonance theory (ART), were introduced by
Grossberg (1976). ART networks are biologically motivated and were developed as possible models of
cognitive phenomena in humans and animals.

A class of ART architectures, called ART1, is characterized by a system of ordinary differential equations
(Carpenter and Grossberg, 1987a) with associated theorems proving its self-stabilization property and the
convergence of its adaptive weights. ART1 embodies certain simplifying assumptions which allow its
behavior to be described in terms of a discrete-time clustering algorithm. A number of
interpretations/simplifications of the ART1 net have been reported in the literature (e.g., Lippmann, 1987;
Pao, 1989; Moore, 1989). In the following, we adopt Moore's abstraction of the clustering algorithm from
the ART1 architecture and discuss this algorithm in conjunction with a simplified architecture.

The basic architecture of the ART1 net consists of a layer of linear units representing prototype vectors
whose outputs are acted upon by a winner-take-all network (described in Section 3.4.1). This architecture is
identical to that of the simple competitive net in Figure 3.4.1 with one major difference: The linear
prototype units are allocated dynamically, as needed, in response to novel input vectors. Once a prototype
unit is allocated, appropriate lateral-inhibitory and self-excitatory connections are introduced so that the
allocated unit may compete with pre-existing prototype units. Alternatively, one may assume a pre-wired
architecture as in Figure 3.4.1 with a large number of inactive (zero weight) units. Here, a unit becomes
active if the training algorithm decides to assign it as a cluster prototype unit, and its weights are adapted
accordingly.

The general idea behind ART1 training is as follows. Every training iteration consists of taking a training
example xk and examining existing prototypes (weight vectors wj) that are sufficiently similar to xk. If a
prototype wi is found to "match" xk (according to a "similarity" test based on a preset matching threshold),
example xk is added to wi's cluster and wi is modified to make it better match xk. If no prototype matches
xk, then xk becomes the prototype for a new cluster. The details of the ART1 clustering procedure are
considered next.
The input vector (pattern) x in ART1 is restricted to binary values, x ∈ {0, 1}^n. Each learned cluster, say
cluster j, is represented by the weight vector wj ∈ {0, 1}^n of the jth prototype unit. Every time an input vector
x is presented to the ART1 net, each existing prototype unit computes a normalized output (the motivation
behind this normalization is discussed later)

yj = (wj^T x) / ||wj||^2        (6.4.1)

and feeds it to the winner-take-all net for determining the winner unit. Note that yj is the ratio of the overlap
between prototype wj and x to the size of wj. The winner-take-all net computes a "winner" unit i. Subject to
further verification, the weight vector of the winner unit wi now represents a potential prototype for the
input vector. The verification comes in the form of passing two tests.

In order to pass the first test, the input x must be "close enough" to the winner prototype wi, i.e.,

(6.4.2)

Here, ||x||2 and ||wi||2 are equal to the number of 1's in x and wi, respectively. Passing this test guarantees
that a sufficient fraction of the wi and x bits are matched.

The second test is a match verification test between wi and x. This test is passed if

(wi^T x) / ||x||^2 ≥ ρ        (6.4.3)

where 0 < ρ < 1 is a user-defined "vigilance" parameter. Here, wi is declared to "match" x if a significant
fraction (determined by ρ) of the 1's in x appear in wi. Note that Equation (6.4.3) causes more differentiation
among input vectors of smaller magnitude. This feature of ART1 is referred to as "automatic scaling," "self-
scaling," or "noise-insensitivity."

If the above two tests are passed by the winner unit i for a given input x (here, the network is said to be in
resonance), then x joins cluster i and this unit's weight vector wi is updated according to

wi^new = wi ∧ x        (6.4.4)

where "∧" stands for the logical AND operation applied component-wise to the corresponding components of
the vectors wi and x. According to Equation (6.4.4), a prototype wi can only have fewer and fewer 1's as
training progresses. Note that it is possible for a training example to join a cluster but eventually to
leave that cluster because other training examples have joined it.

The second scenario corresponds to unit i passing the first test but not the second one. Here, the ith unit is
deactivated (its output is clamped to zero until a new input arrives) and the tests are repeated with the unit
with the next highest normalized output. If this scenario persists even after all existing prototype units are
exhausted, then a new unit representing a new cluster j is allocated and its weight vector wj is initialized
according to

wj = x (6.4.5)
In a third scenario, unit i does not pass the first test. Here, wi is declared "too far" from the input x and a
new unit representing a new cluster, j, is allocated with its weight vector wj initialized as in Equation
(6.4.5). Hence, initially wj is a binary vector. And, since wj is updated according to Equation (6.4.4), the
vector wj conserves its binary nature. This is true for any unit in the ART1 net since all units undergo the
initialization in Equation (6.4.5) upon being allocated.

Note that the learning dynamics in the second scenario described above constitute a search through the
prototype vectors, looking at the closest, next closest, etc., according to the maximum-yj criterion of
Equation (6.4.1). This search is continued until a prototype vector is found that satisfies the matching
criterion in Equation (6.4.3). These criteria are different. The first criterion measures the fraction of the
bits in wj that are also in x, whereas the second criterion measures the fraction of the bits in x that are also
in wj. So going further away by the first measure may actually bring us closer by the second. It should also
be noted that this search only occurs before stability is reached for a given training set. After that, each
prototype vector is matched on the first attempt and no search is needed.

The normalization factor ||wi||2 in Equation (6.4.1) is used as a "tie-breaker." It favors smaller magnitude
prototype vectors over vectors which are supersets of them (i.e., have 1's in the same places) when an input
matches them equally well. This mechanism of favoring small prototype vectors helps maintain prototype
vectors apart. It also helps compensate for the fact that in the updating step of Equation (6.4.4), the
prototype vectors always move to vectors with fewer 1's.

The vigilance parameter ρ in Equation (6.4.3) controls the granularity of the clusters generated by the ART1
net. Small values of ρ allow for large deviations from cluster centers, and hence lead to a small set of clusters.
On the other hand, a higher vigilance leads to a larger number of tight clusters. Regardless of the setting of ρ,
the ART1 network is stable for a finite training set; i.e., the final clusters will not change if additional
training is performed with one or more patterns drawn from the original training set. A key feature of the
ART1 network is its continuous learning ability. This feature coupled with the above stability result allows
the ART1 net to follow nonstationary input distributions.
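A minimal sketch of the clustering procedure in the spirit of Moore's abstraction. Only the winner ordering of Equation (6.4.1) and the vigilance test of Equation (6.4.3) are modeled; the first ("close enough") test is omitted, so this is an illustration rather than the full ART1 algorithm, and all sizes are illustrative.

```python
import numpy as np

def art1_cluster(patterns, rho=0.5):
    """Cluster binary vectors; returns the prototypes and a cluster index per pattern."""
    prototypes, assignment = [], []
    for x in patterns:
        x = np.asarray(x, dtype=int)
        # search prototypes in order of decreasing y_j = (w_j . x) / ||w_j||^2
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(prototypes[j] @ x) / max(prototypes[j].sum(), 1))
        for j in order:
            w = prototypes[j]
            if (w @ x) / max(x.sum(), 1) >= rho:   # vigilance test, Eq. (6.4.3)
                prototypes[j] = w & x              # AND learning rule, Eq. (6.4.4)
                assignment.append(j)
                break
        else:                                      # no prototype matched: new cluster
            prototypes.append(x.copy())            # initialization, Eq. (6.4.5)
            assignment.append(len(prototypes) - 1)
    return prototypes, assignment

rng = np.random.default_rng(4)
data = rng.integers(0, 2, size=(24, 16))           # twenty-four random 16-bit vectors
for rho in (0.5, 0.7):
    protos, idx = art1_cluster(data, rho=rho)
    print(f"rho = {rho}: {len(protos)} clusters")  # higher vigilance gives more clusters
```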

The clustering behavior of the ART1 network is illustrated for a set of random binary vectors. Here, the task
of ART1 is to cluster twenty-four uniformly distributed random vectors x ∈ {0, 1}^16. Simulation results are
shown in Figure 6.4.1 (a) and (b) for ρ = 0.5 and ρ = 0.7, respectively (here, the vectors are shown as 4 × 4
patterns of "on" and "off" pixels for ease of visualization). The resulting prototype vectors are also shown.
Note the effect of the vigilance parameter setting on the granularity of the generated clusters.

The family of ART networks also include more complex models such as ART2 (Carpenter and Grossberg,
1987b) and ART3 (Carpenter and Grossberg, 1990). These ART models are capable of clustering binary
and analog input patterns. However, these models are inefficient from a computational point of view. A
simplified model of ART2, ART2-A, has been proposed which is two to three orders of magnitude faster
than ART2 (Carpenter et al., 1991b). Also, a supervised real-time learning ART model called ARTMAP
has been proposed (Carpenter et al., 1991a).


Figure 6.4.1. Random binary pattern clustering employing the ART1 net. Different vigilance values cause
different numbers of categories (clusters) to form: (a) ρ = 0.5 and (b) ρ = 0.7. For each case, the top row
shows prototype vectors extracted by the ART1 network.

An example of ART2 clustering is shown in Figure 6.4.2 (Carpenter and Grossberg, 1987b). Here, the
problem is to cluster a set of fifty 25-dimensional real-valued input signals (patterns). The results are shown
for two different vigilance levels. It is left to the reader to check (subjectively) the consistency of the
formed clusters. Characterization of the clustering behavior of ART2 was given by Burke (1991) who
draws an analogy between ART-based clustering and k-means-based clustering.

ART networks are sensitive to the presentation order of the input patterns; they may yield different
clustering on the same data when the presentation order of patterns is varied (with all other parameters kept
fixed). Similar effects are also present in incremental versions of classical clustering methods such as k-
means clustering (i.e., k-means is also sensitive to the initial choice of cluster centers).


Figure 6.4.2. ART2 clustering of analog signals for two vigilance levels: The vigilance value in (a) is
smaller than that in (b). (From G. Carpenter and S. Grossberg, 1987b, Applied Optics, 26(23), pp. 4919-
4930, ©1987 Optical Society of America.)

6.4.2 Autoassociative Clustering Network

Other data clustering networks can be derived from "concept forming" cognitive models (Anderson, 1983;
Knapp and Anderson, 1984; Anderson and Murphy, 1986). A "concept" describes the situation where a
number of different objects are categorized together by some rule or similarity relationship. For example, a
person is able to recognize that physically different objects are really "the same" (e.g., a person's concept of
"tree").
A simple concept forming model consists of two basic interrelated components. The first is a prototype
forming component, which is responsible for generating category prototypes via an
autoassociative learning mechanism. The second component is a retrieval mechanism where a prototype
becomes an attractor in a dynamical system. Here, the prototype and its surrounding basin of attraction
represent an individual concept.

Artificial neural networks based on the above concept forming model have been proposed for data
clustering of noisy, superimposed patterns (Anderson et al., 1990; Spitzer et al., 1990; Hassoun et al., 1992,
1994a, 1994b). Here, a feedforward (single or multiple layer) net is trained in an autoassociative mode
(recall the discussion on autoassociative nets in Section 5.3.6) with the noisy patterns. The training is
supervised in the sense that each input pattern to the net serves as its own target. However, these targets are
not the "correct answers" in the general sense of supervised training. The strategy here is to force the
network to develop internal representations during training so as to better reconstruct the noisy inputs. In
the prototype extraction phase, the trained feedforward net is transformed into a dynamical system by using
the output layer outputs as inputs to the net, thus forming an external feedback loop. Now, and with proper
stabilization, when initialized with one of the input patterns the dynamical system will evolve and
eventually converge to the "closest" attractor state. Hopefully, these attractors may be identified with the
prototypes derived from the training phase.

Next, we describe the above ideas in the context of a simple single layer network (Hassoun and Spitzer,
1988; Anderson et al., 1990) and then present a more general autoassociative clustering architecture.
Consider an unlabeled training set of vectors x̃ ∈ R^n representing distorted and/or noisy versions of a set
{xk; k = 1, 2, ..., m} of m unknown prototype vectors. Also, assume a single layer net of n linear units each
having an associated weight vector wi, i = 1, 2, ..., n. Let us denote by W the n × n matrix whose ith row is the
weight vector wi^T. This simple network outputs the vector y ∈ R^n upon the presentation of an input x̃, where
y = W x̃. Now, we update the connection matrix W in response to the presentation of a sample x̃
according to the matrix form of the Widrow-Hoff (LMS) rule

ΔW = η (x̃ − W x̃) x̃^T        (6.4.6)

which realizes a gradient descent minimization of the criterion function

J(W) = (1/2) ||x̃ − W x̃||^2        (6.4.7)

Therefore, the trained net will be represented by a connection matrix W which approximates the mapping

Wx = x (6.4.8)

for all m prototype vectors x. This approximation results from minimizing J in Equation (6.4.7) and it
requires that the clusters of noisy samples associated with each prototype vector are sufficiently
uncorrelated. In addition, it requires that the network be incapable of memorizing the training
autoassociations and that the number of underlying prototypes m to be learned be much smaller than n.

According to Equation (6.4.8), the autoassociative training phase attempts to estimate the unknown
prototypes and encode them as eigenvectors of W with eigenvalues of 1. A simple method for extracting
these eigenvectors (prototypes) is to use the "power" method of eigenvector extraction. This is an iterative
method which can be used to pick out the eigenvector with the largest-magnitude eigenvalue of a matrix A
by repeatedly passing an initially random vector c0 through the matrix, according to (also, refer to footnote
6 in Chapter 3)

ck+1 = Ack, k = 0, 1, 2, ... (6.4.9)


After a number of iterations, the eigenvector with the largest magnitude eigenvalue will dominate. The
above eigenvector extraction method can be readily implemented by adding external feedback to our simple
net, with A represented by W and c by y or x.

Once initialized with one of the noisy vectors x̃, the resulting dynamical system evolves its state (the n-
dimensional output vector y) in such a way that this state moves towards the prototype vector x "most
similar" to x̃. This is possible because the prototype vectors x approximate the dominating eigenvectors of
W (with eigenvalues close to '1'). Note that the remaining eigenvectors of W arise from learning
uncorrelated noise and tend to have small eigenvalues compared to 1. The ability of the autoassociative
dynamical net to selectively extract a learned prototype/eigenvector from a "similar" input vector, as
opposed to always extracting the most dominant eigenvector, is due to the fact that all learned prototypes
have comparable eigenvalues close to unity. Thus, the extracted prototype is the one that is "closest" to the
initial state (input vector) of the net.
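A compact sketch of the whole procedure for a linear single-layer net: LMS autoassociative training on noisy samples of two prototypes, followed by iterated feedback retrieval as in Equation (6.4.9). The learning rate, noise level, and the norm-rescaling used as the stabilization step are illustrative choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eta, noise = 32, 0.02, 0.1

# two unknown prototype vectors that the network is supposed to discover
prototypes = [rng.choice([-1.0, 1.0], size=n) for _ in range(2)]

# autoassociative LMS training on noisy samples, matrix rule of Eq. (6.4.6)
W = np.zeros((n, n))
for _ in range(1000):
    x = prototypes[rng.integers(2)] + noise * rng.normal(size=n)
    W += eta * np.outer(x - W @ x, x)

def retrieve(x0, steps=20):
    """Iterated feedback retrieval (the power method of Eq. (6.4.9)),
    with a simple norm rescaling acting as the stabilization step."""
    y = np.array(x0, dtype=float)
    for _ in range(steps):
        y = W @ y
        y *= np.sqrt(n) / np.linalg.norm(y)
    return y

probe = prototypes[0] + 0.5 * rng.normal(size=n)      # noisy version of prototype 0
y = retrieve(probe)
print("correlation with prototype 0:", round(float(y @ prototypes[0]) / n, 2))  # near 1
print("correlation with prototype 1:", round(float(y @ prototypes[1]) / n, 2))  # much smaller
```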

The stability of the dynamic autoassociative net is an important design issue. Stability is determined by the
network weights and network architecture. Care must be taken in matching the learning algorithm for
prototype encoding in the feedforward net to the dynamic architecture which is ultimately used for
prototype extraction. One would like to design an autoassociative clustering net which minimizes an
associated energy or Liapunov function; i.e., starting from any initial state, the system's state always
evolves along a trajectory for which the energy function is monotonically decreasing (the reader may refer
to Section 7.1.2 for further details on Liapunov functions and stability). Anderson et al. (1990) reported a
stable autoassociative clustering net based on the brain-state-in-a-box (BSB) concept forming model
(Anderson et al., 1977; also, see Section 7.4.1).

A serious limitation of the autoassociative linear net we have just described is that it does not allow the user
to control the granularity of the formed clusters; i.e., the number of learned prototypes. This network will
tend to merge different clusters that are "close" to each other in the input space due to the lack of cluster
competition mechanisms (recall the similarity and vigilance tests employed in the ART1 net for controlling
cluster granularity). When two different clusters are merged by the linear net, they become represented by a
distorted prototype which is a linear combination of the two correct (but unknown) prototypes. Introducing
nonlinearity into the autoassociative net architecture can help the net escape the above limitation by
allowing control of the granularity of the clusters. This feature is discussed in connection with the dynamic
nonlinear multilayer autoassociative network (Spitzer et al., 1990; Wang, 1991; Hassoun et al., 1992)
considered next.

Consider a two-hidden-layer feedforward net with an output layer of linear units. All hidden layer units
employ the hyperbolic tangent activation function

f(net) = tanh(λ net) (6.4.10)

where λ controls the slope of the activation. The activation slopes of the second hidden layer units (the layer
between the first hidden and output layers) are fixed (typically set to 1). On the other hand, the activation
slopes for the units in the first hidden layer are made monotonically increasing during training as explained
below. Each hidden unit in the first layer receives inputs from all components of an n-dimensional input
vector and an additional bias input (held fixed at 1). Similarly, each unit in the second hidden layer receives
inputs from all units in the first hidden layer plus a bias input of 1. Finally, each linear output unit receives
inputs from all second hidden layer units plus a bias.

This layered network functions as an autoassociative net, as described above. Therefore, the n output layer
units serve to reconstruct (decode) the n-dimensional vector presented at the input. We wish the network to
discover a limited number of representations (prototypes) of a set of noisy input vectors, to describe the
training data. An essential feature of the network's architecture is therefore a restrictive "bottleneck" in the
hidden layer; the number of units in each hidden layer (especially the first hidden layer) is small compared
to n. The effect of this bottleneck is to restrict the degrees of freedom of the network, and constrain it to
discovering a limited set of unique prototypes which describes (clusters) the training set. The network does
not have sufficient capacity to memorize the training set. This clustering is further enhanced by aspects of
the learning algorithm, described below. Autoassociative multilayer nets with hidden layer bottleneck have
been studied by Bourlard and Kamp (1988), Baldi and Hornik (1989), Funahashi (1990), Hoshino et al.
(1990), Kramer (1991), Oja (1991), and Usui et al. (1991) among others. In these studies, such nets have
been found to implement principal component analysis (PCA) and nonlinear PCA when one hidden layer
and three or more hidden layers are used, respectively. The reader is referred to Section 5.3.6 for details.

The learning algorithm employed is essentially the incremental backprop algorithm of Section 5.1.1 with
simple heuristic modifications. These heuristics significantly enhance the network's tendency to discover
the best prototypical representations of the training data. These modifications include a dynamic slope for
the first hidden layer activations that saturates during learning, and damped learning rate coefficients. As a
result of learning, the first hidden layer in this net discovers a set of bipolar binary distributed
representations that characterize the various input vectors. The second hidden and the output layers perform
a nonlinear mapping which decodes these representations into reduced-noise versions of the input vectors.

In order to enhance separation of clusters and promote grouping of similar input vectors, the slope of the
activation functions of first hidden layer units is made dynamic; it increases monotonically during learning,
according to

λ(k) = β^k (6.4.11)

where β is greater than but close to 1 and k is the learning step index. As a result, the nonlinearity gradually
(over a period of many cycles through the whole training set) becomes the sgn (sign) function, and the
outputs of the first hidden layer become functionally restricted to bipolar binary values. As these activations
saturate, a limited number of representations for "features" of the input vectors are available. This gradually
forces "similar" inputs to activate a unique distributed representation at this layer. The larger the value of ,
the faster is the saturation of the activations which, in turn, increases the sensitivity of the first hidden layer
representations to differences among the input vectors. This increased sensitivity increases the number of
unique representations at this layer, thus leading the rest of the network to reconstruct an equal number of
prototypes. Hence, the slope saturation parameter β controls cluster granularity and may be viewed as a
vigilance parameter similar to that in the ART nets.

The mapping characteristics of this highly nonlinear first hidden layer may also be thought to emerge from
a kind of nonlinear principal component analysis (PCA) mechanism where unit activities are influenced by
high order statistics of the training set. The other modification to backprop is the use of exponentially
damped learning coefficients. The tendency is for the network to best remember the most recently presented
training data. A decaying learning coefficient helps counteract this tendency and balance the sensitivity of
learning for all patterns. The learning rate coefficient used is therefore dynamically adjusted according to

ρ(k) = ρ0 δ^k (6.4.12)

where ρ0 is the initial learning rate and δ is a predefined constant less than but close to unity. As a result of this exponentially decaying
learning rate, learning initially proceeds rapidly, but then the declining rate of learning produces a de-
emphasis of the most recently learned input vectors, which reduces "forgetting" effects and allows the
repeating patterns to be learned evenly.
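The two schedules are easy to state in code. The sketch below uses the symbol choices adopted above (β for the slope-growth factor, δ for the learning-rate decay, ρ0 for an assumed initial learning rate) and a hypothetical first-hidden-layer weight matrix V; it is an illustration of the schedules only, not the full training algorithm.

import numpy as np

beta, delta = 1.0003, 0.9999    # values quoted in the simulations below
rho0 = 0.1                      # assumed initial learning rate (not given in the text)

def slope(k):                   # Equation (6.4.11): lambda(k) = beta**k
    return beta ** k

def rate(k):                    # Equation (6.4.12): rho(k) = rho0 * delta**k
    return rho0 * delta ** k

def hidden1(x, V, k):           # first-hidden-layer output with dynamic slope
    return np.tanh(slope(k) * (V @ np.append(x, 1.0)))   # bias input fixed at 1

# Presentations needed for the slope to reach 200 when beta = 1.0003;
# with a 50-pattern training set this is roughly 350 cycles, consistent
# with the training-cycle counts quoted in the first simulation below.
print(int(np.ceil(np.log(200.0) / np.log(beta))))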

In the prototype extraction phase, a dynamic net is generated by feeding the above trained net's output back
to the input. A pass is now made over the training set, but this time no learning occurs. The primary
objective of this pass is to classify the vectors in the training data, and extract the prototype discovered by
the network for each cluster. As each vector is presented, an output is generated and is fed back to the input
of the network. This process is repeated iteratively until convergence. When the network settles into a final
state the outputs of the first hidden layer converge to a bipolar binary state. This binary state gives an
intermediate distributed representation (activity pattern) of the particular cluster containing the present
input. The intermediate activity pattern is mapped by the rest of the net into a real-valued activity pattern at
the output layer. This output can be taken as a "nominal" representation of the underlying cluster "center";
i.e., a prototype of the cluster containing the current input. Therefore, the network generates two sets of
prototypes (concepts): Abstract binary-valued concepts and explicit real-valued concepts. The network also
supplies the user with parallel implementations of the two mappings from one concept representation to the
other; the first hidden layer maps the input vectors into their corresponding abstract concepts, while the
second hidden layer and the output layer implement the inverse of this mapping.
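A rough sketch of this extraction pass, assuming hypothetical trained weight matrices V1, V2, and V3 for the first hidden, second hidden, and output layers (with the bias inputs folded in as a fixed extra input of 1), might look as follows.

import numpy as np

def extract_prototype(x, V1, V2, V3, max_iters=100):
    # Feed the output back to the input until the bipolar code produced by the
    # (saturated) first hidden layer stops changing; V1, V2, V3 are assumed to
    # come from the training phase described above.
    prev_code = None
    for _ in range(max_iters):
        h1 = np.sign(V1 @ np.append(x, 1.0))      # abstract bipolar binary concept
        h2 = np.tanh(V2 @ np.append(h1, 1.0))     # second hidden layer (slope 1)
        x = V3 @ np.append(h2, 1.0)               # linear output fed back as input
        code = tuple(h1)
        if code == prev_code:                     # binary code has converged
            break
        prev_code = code
    return h1, x     # abstract binary concept and explicit real-valued prototype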

Proving the stability of this dynamic multilayer autoassociative clustering net is currently an open problem.
The highly nonlinear nature of this dynamical system makes the analysis difficult. More specifically, it
would be difficult to find an appropriate Liapunov function, if one exists, to prove stability. However,
empirical evidence suggests a high degree of stability when the system is initialized with one of the training
vectors and/or a new vector which is "similar" to any one of the training vectors.

We conclude this section by presenting two simulation results which illustrate the capability of the above
autoassociative clustering net. In the first simulation, the net is used to cluster the 50 analog patterns of
Figure 6.4.2. These patterns are repetitively presented to the net until the slopes in the first hidden layer rise
to 200, according to Equation (6.4.11). Two experiments were performed with β = 1.0003 and β = 1.0005. The
network used had eight units in each of the first and second hidden layers. Figures 6.4.3 (a) and (b) show
the learned prototypes (top row) and their associated cluster members (in associated columns) for β = 1.0003
and β = 1.0005, respectively (β = 1.0003 required about 350 training cycles to saturate at 200, while β = 1.0005
required about 210 cycles). A typical setting for δ in Equation (6.4.12) is 0.9999.

Figure 6.4.3. Clusters of patterns formed by the multilayer autoassociative clustering net; (a) β = 1.0003 and
(b) β = 1.0005. The top row shows the learned prototype for each cluster.

It is interesting to note that the choice of network size and of the parameters β and δ was not optimized for this
particular clustering problem. In fact, these parameters were also found to be appropriate for clustering
motor unit potentials in the electromyogram (EMG) signal (Wang, 1991). It is left to the reader to compare
the clustering results in Figure 6.4.3 to those due to the ART2 net shown in Figure 6.4.2. Note how the level
of β controls cluster granularity.

In the second simulation, the autoassociative clustering net (with comparable size and parameters as in the
first simulation) is used to decompose a 10 second recording of an EMG signal obtained from a patient with
a diagnosis of myopathy (a form of muscle disease). A segment of this signal (about 0.8 sec.) is shown in
the top window in Figure 6.4.4. The objective here is to extract prototype motor unit potentials (MUPs)
comprising this signal, and to associate each noisy MUP in the signal with its correct prototype. A
preprocessing algorithm detected 784 candidate MUPs (each MUP pattern is uniformly sampled and is
represented by a 50-dimensional vector centered at the highest peak in the MUP signal) in the 10 second
EMG recording shown. These detected noisy MUPs were used to train the autoassociative clustering net.
The clustering results are shown in Figure 6.4.4. A total of 24 unique prototype MUPs are identified by the
network. The figure shows the prototypes associated with the 11 most significant clusters in small windows;
i.e., a significant number of noisy input MUPs converged to these prototypes. Examples of MUPs from
each cluster are shown to the right of discovered cluster prototypes. This result is superior to those of
existing automated EMG decomposition techniques. For a detailed description and analysis of this problem,
the reader may consult Wang (1991), Hassoun et al. (1994a), and Hassoun et al. (1994b).
Figure 6.4.4. Decomposition of an EMG signal by the autoassociative clustering net. One segment of an
EMG (about 0.8 sec.) is shown in the top window. The extracted MUP waveform prototypes are shown in
small windows. Examples of classified waveforms are shown to the right.

6.5 Summary

This chapter started by introducing the radial basis function (RBF) network as a two layer feedforward net
employing hidden units with locally-tuned response characteristics. This network model is motivated by
biological nervous systems as well as by early results on the statistical and approximation properties of
radial basis functions. The most natural application of RBF networks is in approximating smooth,
continuous multivariate functions of few variables.

RBF networks employ a computationally efficient training method that decouples learning at the hidden
layer from that at the output layer. This method uses a simple clustering (competitive learning) algorithm to
locate hidden unit receptive field centers. It also uses the LMS or delta rule to adjust the weights of the
output units.

These networks have prediction/approximation capabilities comparable to those of backprop networks, but
train orders of magnitude faster. Another advantage of RBF nets is their lower "false-positive"
classification error rates. However, RBF nets require at least an order of magnitude more training data and
more hidden units than backprop nets to achieve the same level of accuracy.

Two major variations to the RBF network were given which lead to improved accuracy (though, at the cost
of reduced training speed). The first variation involved replacing the k-means-clustering-based training
method for locating hidden unit centers by a "soft competition" clustering method. The second variation to
the RBF net substitutes semilocal activation units for the local activation hidden units, and employs gradient
descent-based learning for adjusting all unit parameters.

The CMAC is another example of a localized receptive field net which was considered in this chapter. The
CMAC was originally developed as a model of the cerebellum. This network model shares several of the
features of the RBF net such as fast training and the need for a large number of localized receptive field
hidden units for accurate approximation. It also shares several features (and limitations) with
classical perceptrons (e.g., Rosenblatt's perceptron). One distinguishing feature of the CMAC, though, is its
built-in capability of local generalization. The CMAC has been successfully applied in the control of robot
manipulators.

Three unit-allocating adaptive multilayer feedforward networks were also described in this chapter. The
first two networks belong to the class of hyperspherical classifiers. They employ hidden units with adaptive
localized receptive fields. They also have sparse interconnections between the hidden units and the output
units. These networks are easily capable of forming arbitrarily complex decision boundaries, with rapid
training times. In fact, for one of these networks (PTC net), the training time was shown to be of
polynomial complexity.

The third unit allocating network (cascade-correlation net) differs from all previous networks in its ability
to build a deep net of cascaded units and in its ability to utilize more than one type of hidden unit, co-
existing in the same network. The motivation behind unit-allocating nets is two fold: (1) The elimination of
the guesswork involved in determining the appropriate number of hidden units (network size) for a given
task, and (2) training speed.
Finally, two examples of dynamic multilayer clustering networks are discussed: The ART1 net and the
autoassociative clustering net. These networks are intended for tasks involving data clustering and
prototype generation. The ART1 net is characterized by its on-line capability of clustering binary patterns,
its stability, and its ability to follow nonstationary input distributions. Generalizations of this network allow
the extension of these desirable characteristics to the clustering of analog patterns. These ART networks are
biologically motivated and were developed as possible models of cognitive phenomena in humans and
animals.

The second clustering net is motivated by "concept forming" cognitive models. It is based on two
interrelated mechanisms: prototype formation and prototype extraction. A slightly modified backprop
training method is employed in a customized autoassociative net of sigmoid units in an attempt to estimate
and encode cluster prototypes in such a way that they become attractors of a dynamical system. This
dynamical system is formed by taking the trained feedforward net and feeding its output back to its input.
Results of simulations involving data clustering with these nets are given. In particular, results of motor unit
potential (MUP) prototype extraction and noisy/distorted MUP categorization (clustering) for a real EMG
signal are presented.

It is hoped that the different network models presented in this chapter, and the motivations for developing
them give the reader an appreciation of the diversity and richness of these networks, and the way their
development has been influenced by biological, cognitive, and/or statistical models.
7. Associative Neural Memories
7.0 Introduction

This chapter is concerned with associative learning and retrieval of information (vector patterns) in neural-
like networks. These networks are usually referred to as associative neural memories (or associative
memories) and they represent one of the most extensively analyzed classes of artificial neural networks.
Various associative memory architectures are presented with emphasis on dynamic (recurrent) associative
memory architectures. These memories are treated as nonlinear dynamical systems where information
retrieval is realized as an evolution of the system's state in a high-dimensional state space. Dynamic
associative memories (DAM's) are a class of recurrent artificial neural networks which utilize a
learning/recording algorithm to store vector patterns (usually binary patterns) as stable memory states. The
retrieval of these stored "memories" is accomplished by first initializing the DAM with a noisy or partial
input pattern (key) and then allowing the DAM to perform a collective relaxation search to arrive at the
stored memory which is best associated with the input pattern.

The chapter starts by presenting some simple networks which are capable of functioning as associative
memories and derives the necessary conditions for perfect storage and retrieval of a given set of memories.
The chapter continues by presenting additional associative memory models with particular attention given
to DAM's. The characteristics of high-performance DAM's are defined, and stability, capacity, and retrieval
dynamics of various DAM's are analyzed. Finally, the application of a DAM to the solution of
combinatorial optimization problems is described.

7.1 Basic Associative Neural Memory Models

Several associative neural memory models have been proposed over the last two decades [e.g., Amari,
1972a; Anderson, 1972; Nakano, 1972; Kohonen, 1972 and 1974; Kohonen and Ruohonen, 1973; Hopfield,
1982; Kosko, 1987; Okajima et al., 1987; Kanerva, 1988; Chiueh and Goodman, 1988; Baird, 1990. For an
accessible reference on various associative neural memory models the reader is referred to the edited
volume by Hassoun (1993)]. These memory models can be classified in various ways depending on their
architecture (static versus recurrent), their retrieval mode (synchronous versus asynchronous), the nature of
the stored associations (autoassociative versus heteroassociative), the complexity and capability of the
memory storage/recording algorithm, etc. In this section, a simple static synchronous associative memory is
presented along with appropriate memory storage recipes. Then, this simple associative memory is
extended into a recurrent autoassociative memory by employing feedback. These two basic associative
memories will help define some terminology and serve as a building ground for some additional associative
memory models presented in Section 7.4.

7.1.1 Simple Associative Memories and Their Associated Recording Recipes

One of the earliest associative memory models is the correlation memory (Anderson, 1972; Kohonen, 1972;
Nakano, 1972). This correlation memory consists of a single layer of L non-interacting linear units, with the
lth unit having a weight vector wl ∈ Rn. It associates real-valued input column vectors xk ∈ Rn with
corresponding real-valued output column vectors yk ∈ RL according to the transfer equation

yk = Wxk (7.1.1)

where {xk, yk}, k = 1, 2, ..., m, is a collection of desired associations, and W is an L × n interconnection

matrix whose lth row is given by the weight vector wlT. A block diagram of the simple associative memory expressed in
Equation (7.1.1) is shown in Figure 7.1.1. Note that this associative memory is characterized by linear
matrix-vector multiplication retrievals. Hence, it is referred to as a linear associative memory (LAM). This
LAM is labeled heteroassociative since yk is different (in encoding and/or dimensionality) from xk. If
yk = xk for all k, then this memory is called autoassociative.

Figure 7.1.1. A block diagram of a simple linear heteroassociative memory.

Correlation Recording Recipe

The correlation memory is a LAM which employs a simple recording/storage recipe for loading the m
associations {xk, yk} into memory. This recording recipe is responsible for synthesizing W and is given by

W = y1(x1)T + y2(x2)T + ... + ym(xm)T (7.1.2)

In other words, the interconnection matrix W is simply the correlation matrix of m association pairs.
Another way of expressing Equation (7.1.2) is

W = YXT (7.1.3)

where Y = [y1 y2 ... ym] and X = [x1 x2 ... xm]. Note that for the autoassociative case where the set of
association pairs {xk, xk} is to be stored, one may still employ Equation (7.1.2) or (7.1.3) with yk replaced
by xk.

One appealing feature of correlation memories is the ease of storing new associations or deleting old ones.
For example, if after recording the m associations {x1, y1} through {xm, ym} it is desired to record one
additional association {xm+1, ym+1}, then one simply updates the current W by adding to it the matrix
ym+1(xm+1)T. Similarly, an already recorded association {xi, yi} may be "erased" by simply subtracting
yi(xi)T from W. However, as is seen next, the price paid for simple correlation recording is that to guarantee
successful retrievals, we must place stringent conditions on the set {xk; k = 1, 2, ..., m} of input vectors.
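The recording recipe and the add/delete operations just described amount to a few lines of linear algebra; the following sketch (column layouts assumed) simply restates Equations (7.1.2) and (7.1.3) and the incremental updates in code.

import numpy as np

def record(X, Y):
    # Correlation recording, Equation (7.1.3): columns of X are the keys xk,
    # columns of Y the corresponding recollections yk
    return Y @ X.T

def add_pair(W, x, y):          # store one additional association {x, y}
    return W + np.outer(y, x)

def delete_pair(W, x, y):       # erase a previously recorded association {x, y}
    return W - np.outer(y, x)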

What are the requirements on the {xk, yk} associations which will guarantee the successful retrieval of all
recorded vectors (memories) yk from their associated "perfect key" xk? Substituting Equation (7.1.2) in
Equation (7.1.1) and assuming that the key xh is one of the xk vectors, we get an expression for the retrieved
pattern as:

ŷ = Wxh = [(xh)T xh] yh + Σk≠h yk [(xk)T xh] (7.1.4)

The second term on the right-hand side of Equation (7.1.4) represents the "cross-talk" between the key xh
and the remaining (m − 1) patterns xk. This term can be reduced to zero if the xk vectors are orthogonal. The
first term on the right-hand side of Equation (7.1.4) is proportional to the desired memory yh, with a
proportionality constant equal to the square of the norm of the key vector xh. Hence, a sufficient condition
for the retrieved memory to be the desired perfect recollection is to have orthonormal vectors xk,
independent of the encoding of the yk (note, though, how the yk affects the cross-talk term if the xk's are not
orthogonal). If nonlinear units replace the linear ones in the correlation LAM, perfect recall of binary
{xk, yk} associations is, in general, possible even when the vectors xk are only pseudo-orthogonal. This can
be seen in the following analysis.

A Simple Nonlinear Associative Memory Model

The assumption of binary-valued associations xk ∈ {−1, +1}n and yk ∈ {−1, +1}L and the presence of a clipping
nonlinearity F operating componentwise on the vector Wx (i.e., each unit now employs a sgn or sign
activation function) according to

y = F[Wx] (7.1.5)

relaxes some of the constraints imposed by correlation recording of a LAM. Here, W needs to be
synthesized with the requirement that only the sign of the corresponding components of yk and Wxk agree.
Next, consider the normalized correlation recording recipe given by:

W = (1/n) [y1(x1)T + y2(x2)T + ... + ym(xm)T] = (1/n) YXT (7.1.6)

which automatically normalizes the xk vectors (note that the square of the norm of an n-dimensional bipolar
binary vector is n). Now, if one of the recorded key patterns xh is presented as input, then the following
expression for the retrieved memory pattern can be written:

Wxh = yh + Δh (7.1.7)

where Δh represents the cross-talk term, Δh = (1/n) Σk≠h yk [(xk)T xh]. For the ith component of the retrieved pattern y = F[Wxh], Equation (7.1.7) gives yi = sgn(yih + Δih), from which it can be seen that the condition for perfect recall (yi = yih) is given by the requirements

Δih > −1 whenever yih = +1

and

Δih < +1 whenever yih = −1

for i = 1, 2, ..., L. These requirements are less restrictive than the orthonormality requirement of the xk's in a
LAM.
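A small numerical check of this nonlinear correlation memory, using random bipolar pairs with m much smaller than n (the dimensions and seed below are arbitrary), is sketched next.

import numpy as np

rng = np.random.default_rng(1)
n, L, m = 200, 50, 10                       # chosen so that m << n
X = rng.choice([-1.0, 1.0], size=(n, m))    # bipolar keys xk (columns)
Y = rng.choice([-1.0, 1.0], size=(L, m))    # bipolar recollections yk (columns)

W = (Y @ X.T) / n                           # normalized correlation recording, Eq. (7.1.6)
y_hat = np.where(W @ X[:, 0] >= 0, 1.0, -1.0)   # clipped retrieval, Eq. (7.1.5)
print(np.array_equal(y_hat, Y[:, 0]))       # perfect recall is very likely here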

Uesaka and Ozeki (1972) and later Amari (1977a, 1990) [see also Amari and Yanai (1993)] analyzed the
error correction capability of the above nonlinear correlation associative memory when the memory is
loaded with m independent, uniformly distributed, and random bipolar binary associations {xk, yk}. Based
on this analysis, the relation between the output and input error rates Dout and Din, respectively, in the limit
of large n is given by:
(7.1.8)

where is defined as

(7.1.9)

Here, Din is the normalized Hamming distance between a perfect key vector xk and a noisy version x̃k of xk;
that is, Din is the fraction of bits in which x̃k and xk disagree. Din may also be related to the "overlap"
(1/n)(xk)T x̃k between xk and x̃k as Din = [1 − (1/n)(xk)T x̃k]/2. Similarly, Dout is the normalized Hamming
distance between yk and the output of the associative memory due to the input x̃k. The error rates Din
and Dout may also be viewed as the probability of error of an arbitrary bit in the input and output vectors,
respectively (for an insight into the derivation of Equations (7.1.8) and (7.1.9), the reader is referred to the
analysis in Section 7.2.1). Equation (7.1.8) is plotted in Figure 7.1.2 for several values of the pattern ratio
r = m/n. Note how the ability of the correlation memory to retrieve stored memories from noisy inputs is
reduced as the pattern ratio r approaches and exceeds the value 0.15. For low loading levels (r < 0.15), the
error correction capabilities of the memory improve, and for r << 1 the memory can correct up to 50 percent
error in the input patterns with a probability approaching 1. For large n with n >> m it can be shown that a
random set of m key vectors xk become mutually orthogonal with probability approaching unity (Kanter and
Sompolinsky, 1987). Hence, the loading of the m associations {xk, yk}, with arbitrary yk, is assured using
the normalized correlation recording recipe of Equation (7.1.6).

Figure 7.1.2. Output versus input error rates for a correlation memory with clipping (sgn) nonlinearity for
various values of the pattern ratio r = m/n. Bipolar binary association vectors are assumed which have
independent, random, and uniformly distributed components. Dotted lines correspond to the approximation
in Equation (7.1.9).
Optimal Linear Associative Memory (OLAM)

The correlation recording recipe does not make optimal use of the LAM interconnection weights. A more
effective recording technique can be derived which guarantees perfect retrieval of stored memories yk from
inputs xk as long as the set {xk; k = 1, 2, ..., m} is linearly independent (as opposed to the more restrictive
requirement of orthogonality required by the correlation-recorded LAM). This recording technique leads to
the optimal linear associative memory (OLAM) (Kohonen and Ruohonen, 1973) and is considered next.

For perfect storage of m associations {xk, yk}, a LAM's interconnection matrix W must satisfy the matrix
equation given by:

Y = WX (7.1.10)

with Y and X as defined earlier in this section. Equation (7.1.10) can always be solved exactly if all m
vectors xk (columns of X) are linearly independent, which implies that m must be smaller than or equal to n. For
the case m = n, the matrix X is square and a unique solution for W in Equation (7.1.10) exists, giving:

W = YX−1 (7.1.11)

which requires that the matrix inverse X−1 exists; i.e., the set {xk} is linearly independent. Thus, this
solution guarantees the perfect recall of any yk upon the presentation of its associated key xk.

Now, returning to Equation (7.1.10) with the assumption m < n and that the xk are linearly independent, it
can be seen that an exact solution W* may not be unique. In this case, we are free to choose any of the W*
solutions satisfying Equation (7.1.10). In particular, the minimum Euclidean norm solution (Rao and Mitra,
1971; see Problem 7.1.6 for further details) given by:

W = Y(XTX)−1XT (7.1.12)

is desirable since it leads to the best error-tolerant (optimal) LAM (Kohonen, 1984). Equation (7.1.12) will
be referred to as the "projection" recording recipe since the matrix-vector product (XTX)−1XTxk
transforms the kth stored vector xk into the kth column of the m × m identity matrix. Note that for an
arbitrary Y, if the set {xk} is orthonormal, then XT X = I and Equation (7.1.12) reduces to the correlation
recording recipe of Equation (7.1.3). An iterative version of the projection recording recipe in Equation
(7.1.12) exists (Kohonen, 1984) based on Greville's theorem (Albert, 1972) which leads to the exact weight
matrix W after exactly one presentation of each one of the m vectors xk. This method is convenient since a
new association can be learned (or an old association can be deleted) in a single update step without
involving other earlier-learned memories. Other adaptive versions of Equation (7.1.12) can be found in
Hassoun (1993).
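In code, projection recording reduces to one least-squares solve (or a call to a pseudo-inverse routine); the dimensions below are arbitrary and random real-valued associations are assumed.

import numpy as np

rng = np.random.default_rng(2)
n, L, m = 50, 30, 20
X = rng.standard_normal((n, m))           # m linearly independent keys (with prob. 1)
Y = rng.standard_normal((L, m))           # arbitrary recollections

# Projection recording, Equation (7.1.12): W = Y (X^T X)^{-1} X^T
W = Y @ np.linalg.solve(X.T @ X, X.T)     # equivalently, W = Y @ np.linalg.pinv(X)
print(np.allclose(W @ X, Y))              # all m associations are stored exactly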

OLAM Error Correction Capabilities

The error correcting capabilities of OLAMs [with the projection recording in Equation (7.1.12)] have been
analyzed by Kohonen (1984) and Casasent and Telfer (1987), among others, for the case of real-valued
associations. The following is a brief account of the key points of such analysis. Let x̃ = xk + n, where
xk is the key of one of the m stored associations {xk, yk}. Also, assume xk ∈ Rn and yk ∈ RL. The vector n is a
noise vector of zero mean with a given covariance matrix. Denote the variance of the input and output
noise by σin2 and σout2, respectively. That is, σin2 = E(ni2) and σout2 = E[(ŷi − yik)2], where
ŷi is the ith component of the retrieved vector ŷ = Wx̃. Here, the expectation is taken over the
elements of the argument vector, not over k (note that the output noise is also of zero mean, since
the input noise is zero-mean and the LAM retrieval operation is linear). For an autoassociative OLAM
(yk = xk, for all k) with a linearly independent set {xk}, the error correction measure σout2/σin2 is given by
(Kohonen, 1984)

σout2/σin2 = m/n (7.1.13)

Thus, for linearly independent key vectors (requiring m ≤ n) the OLAM always reduces the input noise (or in
the worst case when m = n, the input noise is not amplified). Also, note that the smaller m is relative to n,
the better the noise suppression capability of the OLAM.
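An empirical check of Equation (7.1.13) as reconstructed above is straightforward: since retrieval is linear, only the noise component needs to be propagated. The sizes, noise level, and number of noise samples below are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 25
X = rng.standard_normal((n, m))
W = X @ np.linalg.solve(X.T @ X, X.T)          # autoassociative projection OLAM

noise = 0.1 * rng.standard_normal((n, 5000))   # zero-mean input noise vectors
out = W @ noise                                # output noise (retrieval is linear)
print(out.var() / noise.var())                 # close to m/n = 0.25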

For the heteroassociative case (yk ≠ xk), it can be shown (Casasent and Telfer, 1987) that the OLAM error
correction is given by:

(7.1.14)

where yij is the ijth element of matrix Y, and Tr(·) is the trace operator (which simply sums the diagonal
elements of the argument matrix). The first expected value operator is taken over all elements of Y, and
both expectation operators are taken over the entire ensemble of possible recollection and key vectors,
respectively. Also, it is assumed that yk and xk are not correlated and that the recollection vectors yk have
equal energy E(y), defined as E(y) = (yk)Tyk. Equation (7.1.14) shows that the error correction
in a heteroassociative OLAM depends on the choice of not only the key vectors xk but also on that of the
recollection vectors. Poor performance is to be expected when the matrix XTX is nearly singular [which
leads to a large value for Tr[(XTX)−1]]. The reader should be warned, though, that the error correction
measure σout2/σin2 is not suitable for heteroassociative OLAM's because it is not normalized against variation in
key/recollection vector energies; one can artificially reduce the value of this measure by merely reducing
the energy of the recollection vectors yk [i.e., reducing E(y)]. The reader is referred to Problem 7.1.11 for
a more appropriate error correction measure for heteroassociative LAM's.

Error correction characteristics of nonlinear associative memories whose transfer characteristics are
described by Equation (7.1.5) and employing projection recording with uniformly distributed random
bipolar binary key/recollection vectors have also been analyzed. The reader is referred to the theoretical
analysis by Amari (1977a) and the empirical analysis by Stiles and Denq (1987) for details.

Strategies for Improving Memory Recording

The encoding of the xk's, their number relative to their dimension, and the recording recipe employed highly
affect the performance of an associative memory. Assuming that an associative memory architecture and a
suitable recording recipe are identified, how can one improve associative retrieval and memory loading
capacity? There are various strategies for enhancing associative memory performance. In the following, two
example strategies are presented.
One strategy involves the use of a multiple training method (Wang et al., 1990) which emphasizes those
unsuccessfully stored associations by introducing them to the weight matrix W through multiple recording
passes until all associations are recorded. This strategy is potentially useful when correlation recording is
employed. In this case, the interconnection matrix is equivalent to a weighted correlation matrix with
different weights on different association pairs. Here, it is assumed that there exists a weighted correlation
matrix which can store all desired association pairs.

User-defined specialized associations may also be utilized in a strategy for improving associative memory
performance. For instance, one way to enhance the error correction capability of an associative memory is
to augment the set of association pairs {xk, yk} to be stored with a collection of associations of the form
{x̃k, yk} where x̃k represents a noisy version of xk. Here, several instances of noisy key vector versions for
each desired association pair may be added. This strategy arises naturally in training pattern classifiers and
is useful in enhancing the robustness of associative retrieval. The addition of specialized association pairs
may also be employed when specific associations must be introduced (or eliminated). One possibility of
employing this strategy is when a "default memory" is required. For example, for associations encoded such
that sparse input vectors si have low information content, augmenting the original set of associations with
associations of the form {si, 0} during recording leads to the creation of a default "no-decision" memory 0.
This memory is retrieved when highly corrupted noisy input key vectors are input to the associative
memory, thus preventing these undesirable inputs from causing the associative memory to retrieve the
"wrong" memories.

The strategy of adding specialized associations increases the number of associations, m, to be stored and
may result in m > n. Therefore, this strategy is not well suited for correlation LAMs. Here, one may employ
a recording technique which synthesizes W such that the association error is minimized over all m
association pairs. Mathematically speaking, the desired solution is the one which minimizes the SSE
criterion function J(W) given by:

J(W) = Σk=1..m Σi=1..L (yik − ŷik)2 (7.1.15)

where yik and ŷik are the ith components of the desired memory yk and of the estimated one ŷk = Wxk,
respectively. Now, by setting the gradient of J(W) to zero and solving for W, one arrives at the following
memory storage recipe:

W = YX† = YXT(XXT)−1 (7.1.16)

where X† is the pseudo-inverse of matrix X (Penrose, 1955). This solution assumes that the inverse of the
matrix XXT exists.

7.1.2 Dynamic Associative Memories (DAMs)

Associative memory performance can be improved by utilizing more powerful architectures than the simple
ones considered above. As an example, consider the autoassociative version of the single layer associative
memory employing units with the sign activation function and whose transfer characteristics are given by
Equation (7.1.5). Now assume that this memory is capable of associative retrieval of a set of m bipolar
binary memories {xk}. Upon the presentation of a key x̃ which is a noisy version of one of the stored
memory vectors xk, the associative memory retrieves (in a single pass) an output y which is closer to the stored
memory xk than x̃ is. In general, only a fraction of the noise (error) in the input vector is corrected in the
first pass (presentation). Intuitively, we may proceed by taking the output y and feeding it back as an input to
the associative memory hoping that a second pass would eliminate more of the input noise. This process
could continue with more passes until we eliminate all errors and arrive at a final output y equal to xk. The
retrieval procedure just described amounts to constructing a recurrent associative memory with the
synchronous (parallel) dynamics given by
x(t + 1) = F[W x(t)] (7.1.17)

where t = 0, 1, 2, 3, ... and x(0) is the initial state of the dynamical system which is set equal to the noisy
key x̃. For proper associative retrieval, the set of memories {xk} must correspond to stable states
(attractors) of the dynamical system in Equation (7.1.17). In this case, we should synthesize W (which is
the set of all free parameters wij of the dynamical system in this simple case) so that starting from any initial
state x(0), the dynamical associative memory converges to the "closest" memory state xk. Note that a
necessary requirement for such convergent dynamics is system stability.
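A rough sketch of this multi-pass retrieval, using the zero-diagonal normalized correlation recording described later in this chapter (memory count, corruption level, and seed are arbitrary), is given below.

import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 10
X = rng.choice([-1.0, 1.0], size=(n, m))       # stored memories (columns)
W = (X @ X.T) / n                              # autoassociative correlation recording
np.fill_diagonal(W, 0.0)                       # zero self-connections (wii = 0)

x = X[:, 0].copy()
flip = rng.choice(n, size=30, replace=False)
x[flip] *= -1.0                                # corrupt 30 of the 200 bits

for _ in range(20):                            # synchronous dynamics, Eq. (7.1.17)
    x_new = np.where(W @ x >= 0, 1.0, -1.0)
    if np.array_equal(x_new, x):               # reached a fixed point
        break
    x = x_new
print(int(np.sum(x != X[:, 0])))               # remaining bit errors (typically 0)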

In the following, various variations of the above dynamical associative memory are presented and their
stability is analyzed.

Continuous-Time Continuous-State Model

Consider the nonlinear active electronic circuit shown in Figure 7.1.3. This circuit consists of resistors,
capacitors, ideal current sources, and identical nonlinear amplifiers. Each amplifier provides an output
voltage xi given by f(ui) where ui is the input voltage and f is a differentiable monotonically increasing
nonlinear activation function, such as tanh(ui). Each amplifier is also assumed to provide an inverting
terminal for producing output . The resistor Rij connects the output voltage xj (or −xj) of the jth

amplifier to the input of the ith amplifier. Since, as will be seen later, the play the role of
interconnection weights, positive as well as "negative" resistors are required. Connecting a resistor Rij to −xi
helps avoid the complication of actually realizing negative resistive elements in the circuit. The R and C are
positive quantities and are assumed equal for all n amplifiers. Finally, the current Ii represents an external
input signal (or bias) to amplifier i.

The circuit in Figure 7.1.3 is known as the Hopfield net. It can be thought of as a single layer neural net of
continuous nonlinear units with feedback. The ith unit in this circuit is shown in Figure 7.1.4. The
dynamical equations describing the evolution of the ith state xi, i = 1, 2, ..., n, in the Hopfield net can be
derived by applying Kirchhoff's current law to the input node of the ith amplifier as

C dui/dt = Σj (xj − ui)/Rij − ui/R + Ii (7.1.18)

which can also be written as

C dui/dt = Σj wij xj − λi ui + Ii (7.1.19)
Figure 7.1.3. Circuit diagram for an electronic dynamic associative memory.

Figure 7.1.4. Circuit diagram for the ith unit of the associative memory in Figure 7.1.3.

where wij = 1/Rij and λi = 1/R + Σj 1/Rij (or wij = −1/Rij if the inverting output of unit j is connected to
unit i). The above Hopfield net can be considered as a special case of a more general dynamical network
developed and studied by Cohen and Grossberg (1983), which has an ith state dynamics expressed by:
(7.1.20)

The overall dynamics of the Hopfield net can be described in compact matrix form as

C du/dt = −Λu + Wx + θ (7.1.21)

where C = CI (I is the identity matrix), Λ = diag(λ1, λ2, ..., λn), x = F(u) = [f(u1) f(u2) ... f(un)]T,
θ = [I1 I2 ... In]T, and W is the interconnection matrix whose elements are the weights wij defined above.

The equilibria of the dynamics in Equation (7.1.21) are determined by setting du/dt = 0, giving

Λu = Wx + θ = WF(u) + θ (7.1.22)

A sufficient condition for the Hopfield net to be stable is that the interconnection matrix W be symmetric
(Hopfield, 1984). Furthermore, Hopfield showed that the stable states of the network are the local minima
of the bounded computational energy function (Liapunov function)

E(x) = −(1/2) xTWx − θTx + Σj λj ∫[0, xj] f−1(s) ds (7.1.23)

where x = [x1 x2 ... xn]T is the net's output state, and f−1 is the inverse of the activation function
xj = f(uj). Note that the value of the right-most term in Equation (7.1.23) depends on the specific shape of
the nonlinear activation function f. For high gain approaching infinity, f(uj) approaches the sign function;
i.e., the amplifiers in the Hopfield net become threshold elements. In this case, the computational energy
function becomes approximately the quadratic function

E(x) = −(1/2) xTWx − θTx (7.1.24)

It has been shown (Hopfield, 1984) that the only stable states of the high-gain, continuous-time,
continuous-state system in Equation (7.1.21) are the corners of the hypercube; i.e., the local minima of
Equation (7.1.24) are states x* ∈ {−1, +1}n. For large but finite amplifier gains, the third term in Equation
(7.1.23) begins to contribute. The sigmoidal nature of f(u) leads to a large positive contribution near
hypercube boundaries, but negligible contribution far from the boundaries. This causes a slight drift of the
stable states toward the interior of the hypercube.
Another way of looking at the Hopfield net is as a gradient system which searches for local minima of the
energy function E(x) defined in Equation (7.1.23). To see this, simply take the gradient of E with respect to
the state x and compare to Equation (7.1.21). Hence, by equating terms, we have the following gradient
system:

dx/dt = −μ ∇E(x) (7.1.25)

where μ = diag(f'(u1)/C, f'(u2)/C, ..., f'(un)/C). Now, the gradient system in Equation (7.1.25) converges asymptotically to
an equilibrium state which is a local minimum or a saddle point of the energy E (Hirsch and Smale, 1974)
(fortunately, the unavoidable noise in practical applications prevents the system from staying at the saddle
points and convergence to a local minimum is achieved). To see this, we first note that the equilibria of the
system described by Equation (7.1.25) correspond to local minima (or maxima or points of inflection) of

E(x) since dx/dt = 0 means ∇E(x) = 0. For each isolated local minimum x*, there exists a neighborhood
over which the candidate function V(x) = E(x) − E(x*) has continuous first partial derivatives and is strictly
positive except at x* where V(x*) = 0. Additionally,

dV/dt = dE/dt = [∇E(x)]T dx/dt = −Σj (f'(uj)/C) (∂E/∂xj)2 (7.1.26)

is always negative [since f'(uj) is always positive because of the monotonically nondecreasing nature of the
relation xj = f(uj)] or zero at x*. Hence V is a Liapunov function, and x* is asymptotically stable.

The operation of the Hopfield net as an autoassociative memory is straightforward; given a set of memories
{xk}, the interconnection matrix W is encoded such that the xk's become local minima of the Hopfield net's
energy function E(x). Then, when the net is initialized with a noisy key x̃, its output state evolves along
the negative gradient of E(x) until it reaches the closest local minimum which, hopefully, corresponds to one of the
fundamental memories xk. In general, however, E(x) will have additional local minima other than the
desired ones encoded in W. These additional undesirable stable states are referred to as spurious memories.

When used as a DAM, the Hopfield net is operated with very high activation function gains and with
binary-valued stored memories. The synthesis of W can be done according to the correlation recording
recipe of Equation (7.1.3) or the more optimal recipe in Equation (7.1.12). These recording recipes lead to
symmetrical W's (since autoassociative operation is assumed; i.e., yk = xk for all k) which guarantees the
stability of retrievals. Note that the external bias may be eliminated in such DAMs. The elimination of bias,
the symmetric W, and the use of high gain amplifiers in such DAMs lead to the truncated energy function:

E(x) = −(1/2) xTWx (7.1.27)

Additional properties of these DAMs are explored in Problems 7.1.13 through 7.1.15.

Discrete-Time Continuous-State Model


An alternative model for retrieving the stable states (attractors) can be derived by employing the relaxation
method (also known as the fixed point method) for iteratively solving Equation (7.1.22) (Cichocki and
Unbehauen, 1993). Here, an initial guess x(0) for an attractor state is used as the initial search point in the
relaxation search. Starting from Equation (7.1.22) with Λ = I (without loss of generality) and recalling that
x = F(u), we may write the relaxation equation

u(k + 1) = Wx(k) + θ; k = 0, 1, 2, ... (7.1.28)

or, by solving for x(k + 1),

x(k + 1) = F[Wx(k) + θ] (7.1.29)

Equation (7.1.29) describes the dynamics of a discrete-time continuous-state synchronously updated DAM.
For θ = 0, Equation (7.1.29) is identical to Equation (7.1.17) which was intuitively derived, except that
the unit activations in the above relaxation model are of sigmoid-type as opposed to the threshold-type (sgn)
assumed in Equation (7.1.17). Also, when the unit activations are piece-wise linear, Equation (7.1.29) leads
to a special case of the BSB model, discussed in Section 7.4.1. The parallel update nature of this DAM is
appealing since it leads to faster convergence (in software simulations) and easier hardware
implementations as compared to the continuous-time Hopfield model.
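A minimal sketch of one synchronous step of this discrete-time continuous-state DAM, taking the activation as tanh with an assumed gain parameter beta, is given below; it simply restates Equation (7.1.29).

import numpy as np

def dam_step(x, W, theta, beta):
    # One synchronous update of the discrete-time continuous-state DAM,
    # Equation (7.1.29), with F applied componentwise as tanh with gain beta.
    return np.tanh(beta * (W @ x + theta))

# example usage (W, x, and beta are placeholders):
# x_next = dam_step(x, W, np.zeros(len(x)), beta=5.0)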

Most current implementations of continuous-time neural networks are done using computer simulations
which are necessarily discrete-time implementations. However, the stability results obtained earlier for the
continuous-time DAM do not necessarily hold for the discrete-time versions. So it is important to have a
rigorous discrete-time analysis of the stability of the dynamics in Equation (7.1.29). Marcus and
Westervelt (1989) showed that the function

(7.1.30)

where

(7.1.31)

is a Liapunov function for the DAM in Equation (7.1.29) when W is symmetric and the activation gain β
[e.g., assume f(x) = tanh(βx)] satisfies the condition

β λmin > −1 (7.1.32)

Here, λmin is the smallest eigenvalue of the interconnection matrix W. Equation (7.1.30) is identical to
Equation (7.1.23) since it is assumed that λj = 1, j = 1, 2, ..., n. If W has no negative eigenvalues, then
Equation (7.1.32) is satisfied by any value of β, since β > 0. On the other hand, if W has one or more negative
eigenvalues, then λmin, the most negative of them, places an upper limit on the gain β for stability.
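The gain limit is easy to evaluate for a given interconnection matrix; the sketch below assumes the condition as reconstructed in Equation (7.1.32) above and a symmetric W.

import numpy as np

def max_stable_gain(W):
    # Largest activation gain allowed by the condition beta * lambda_min > -1:
    # unbounded if W has no negative eigenvalue, otherwise 1/|lambda_min|.
    lam_min = np.linalg.eigvalsh(W).min()     # W assumed symmetric
    return np.inf if lam_min >= 0.0 else 1.0 / abs(lam_min)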

To prove that E in Equation (7.1.30) is a Liapunov function when Equation (7.1.32) is satisfied, consider
the change in E between two discrete time steps:
(7.1.33)

Using the update equation for xi from Equation (7.1.29) and the symmetry property of W, Equation (7.1.33)
becomes

(7.1.34)

where Δxi(k) = xi(k+1) − xi(k). The last term in Equation (7.1.34) is related to Δxi(k) by the inequality (Marcus
and Westervelt, 1989)

(7.1.35)

where or, by using Equation (7.1.31), . Combining Equations (7.1.34)


and (7.1.35) leads to

(7.1.36)

where Δx(k) = x(k + 1) − x(k), and δij = 1 for i = j and δij = 0 otherwise. Now, if the matrix appearing in Equation (7.1.36) is positive definite,
then ΔE(k) ≤ 0 (equality holds only when Δx(k) = 0, which implies that the network has reached an attractor).
The requirement that this matrix be positive definite is satisfied by the inequality of Equation (7.1.32). This result,
combined with the fact that E(k) is bounded, shows that the function in Equation (7.1.30) is a Liapunov
function for the DAM in Equation (7.1.29) and thus the DAM is stable. If, on the other hand, the inequality
of Equation (7.1.32) is violated, then it can be shown that the DAM can develop period-2 limit cycles
(Marcus and Westervelt, 1989).

Discrete-Time Discrete-State Model

Starting with the dynamical system in Equation (7.1.29) and replacing the continuous activation function by
the sign function, one arrives at the discrete-time discrete-state parallel (synchronous) updated DAM model
where all states xi(k), i = 1, 2, ..., n, are updated simultaneously according to

xi(k + 1) = sgn[Σj wij xj(k) + Ii], i = 1, 2, ..., n (7.1.37)
Another version of this DAM is one which operates in a serial (asynchronous) mode. It assumes the same
dynamics as Equation (7.1.37) for the ith unit, but only one unit updates its state at a given time. The unit
which updates its state is chosen randomly and independently of the times of firing of the remaining (n − 1)
units in the DAM. This asynchronously updated discrete-state DAM is commonly known as the discrete
Hopfield net, which was originally proposed and analyzed by Hopfield (1982). In its original form, this net
was proposed as an associative memory which employed the correlation recording recipe for memory
storage.

It can be shown (see Problem 7.1.17) that the discrete Hopfield net with a symmetric interconnection matrix
(wij = wji) and with nonnegative diagonal elements (wii ≥ 0) is stable with the same Liapunov function as that
of a continuous-time Hopfield net in the limit of high amplifier gain; i.e., it has the Liapunov function in
Equation (7.1.24). Hopfield (1984) showed that both nets (discrete and continuous nets with the above
assumptions) have identical energy maxima and minima. This implies that there is a one-to-one
correspondence between the memories of the two models. Also, since the two models may be viewed as
minimizing the same energy function E, one would expect that the macroscopic behavior of the two models
are very similar; that is, both models will perform similar memory retrievals.
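The energy argument can be checked numerically for the discrete Hopfield net: under serial updates with a symmetric, zero-diagonal W, the quadratic energy of Equation (7.1.24) (with zero bias) never increases. The sketch below uses arbitrary sizes and a random initial state, and breaks sign ties toward +1.

import numpy as np

rng = np.random.default_rng(5)
n, m = 120, 8
X = rng.choice([-1.0, 1.0], size=(n, m))
W = (X @ X.T) / n
np.fill_diagonal(W, 0.0)                       # symmetric W with wii = 0

def energy(x):                                 # truncated energy, Eq. (7.1.27)
    return -0.5 * x @ W @ x

x = np.where(rng.standard_normal(n) >= 0, 1.0, -1.0)   # random initial state
E = energy(x)
for _ in range(3000):                          # serial (asynchronous) updates
    i = rng.integers(n)                        # randomly chosen unit
    x[i] = 1.0 if W[i] @ x >= 0.0 else -1.0
    E_new = energy(x)
    assert E_new <= E + 1e-9                   # energy is non-increasing
    E = E_new
print(E)                                       # energy of the reached attractor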

7.2 DAM Capacity and Retrieval Dynamics

In this section, the capacity and retrieval characteristics of the autoassociative DAMs introduced in the
previous section are analyzed. Correlation-recorded DAMs are considered first, followed by projection-
recorded DAMs.

7.2.1 Correlation DAMs

DAM capacity is a measure of the ability of a DAM to store a set of m unbiased random binary patterns
(that is, the vector components are independent random variables taking values +1 or −1 with probability 1/2) and
at the same time be capable of associative recall (error correction). A commonly used capacity measure,
known as "absolute" capacity, takes the form of an upper bound on the pattern ratio r = m/n in the limit of large n such that all
stored memories are equilibrium points, with a probability approaching one. This capacity measure, though,
does not assume any error correction behavior; i.e., it does not require that the fundamental memories xk be
attractors with associated basins of attraction. Another capacity measure, known as "relative" capacity, has
been proposed which is an upper bound on r such that the fundamental memories or their "approximate"
versions are attractors (stable equilibria).

It has been shown (Amari, 1977a; Hopfield, 1982; Amit et al., 1985) that if most of the memories in a
correlation-recorded discrete Hopfield DAM, with wii = 0, are to be remembered approximately (i.e.,
nonperfect retrieval is allowed), then r must not exceed 0.15. This value is the relative capacity of the DAM.
Another result on the capacity of this DAM for the case of error-free memory recall by one-pass parallel
convergence is (in probability) given by the absolute capacity (Weisbuch and Fogelman-Soulié, 1985;
McEliece et al., 1987; Amari and Maginu, 1988; Newman, 1988), expressed as the limit

rmax = mmax/n = 1/(4 ln n) → 0 as n → ∞ (7.2.1)

Equation (7.2.1) indicates that the absolute capacity approaches zero as n approaches infinity! Thus, the
correlation-recorded discrete Hopfield net is an inefficient DAM model. The absolute capacity result in
Equation (7.2.1) is derived below.

Assuming yk = xk (autoassociative case) in Equation (7.1.7) and wii = 0, i = 1, 2, ..., n, then by direct recall
from initial input x(0) = xh with xh ∈ {xk}, the ith bit of the retrieved state x(1) is given by

xi(1) = sgn[xi(0) + Δi(0)] (7.2.2)

where Δi(0) is the cross-talk term seen by the ith unit.
Consider the quantity Ci(0) = −xi(0)Δi(0). Now, if Ci(0) is negative then xi(0) and Δi(0) have the same sign and
the cross-talk term Δi(0) does no harm. On the other hand, if Ci(0) is positive and larger than 1, then the one-
pass retrieved bit xi(1) is in error. Next, assume that the stored memories are random, with equal probability
for xik = +1 and for xik = −1, independently for each k and i. Hence, for large m and n, the Ci(0) term is approximately
distributed according to a normal distribution N(μ, σ2) = N(0, m/n). To see this, we first note that Δi(0), and
equivalently Ci(0), is 1/n times the sum of (n − 1)(m − 1) independent, and uniformly distributed random bipolar
binary numbers, and thus it has a binomial distribution with zero mean and variance (n − 1)(m − 1)/n2, which is
approximately m/n. Thus, by virtue of the Central Limit Theorem, Ci(0) approaches N(0, m/n) asymptotically as m and n become large (Mosteller et al.,
1970). Therefore, we may compute the probability that xi(1) is in error, Perror = Prob(Ci(0) > 1), by
integrating N(0, m/n) from 1 to ∞, giving

Perror = (1/2)[1 − erf(sqrt(n/(2m)))] (7.2.3)

where erf(·) is the error function. Note that Equation (7.2.3) is a special case of Equation (7.1.8) where Din in
Equation (7.1.8) is set to zero. Now, using the fact that for a random variable x distributed according to
N(0, σ2), Prob(x > 3σ) is approximately 0.0014, if the ith bit xi(1) in Equation (7.2.2) is required to be retrievable with error probability less
than 0.0014, then the condition 3σ < 1, or m/n < 1/9, must be satisfied. Therefore, m < 0.111n is required for Perror ≤
0.0014. Similarly, Equation (7.2.3) can be solved for the requirement Perror ≤ 0.005, which gives m < 0.15n.
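The two numbers quoted above are easy to verify with Equation (7.2.3) as reconstructed here, taking the cross-talk variance to be r = m/n.

import math

def p_error(r):
    # One-pass bit error probability, Eq. (7.2.3): Perror = Prob(Ci(0) > 1)
    # for Ci(0) ~ N(0, r), with r = m/n.
    return 0.5 * math.erfc(1.0 / math.sqrt(2.0 * r))

print(p_error(1.0 / 9.0))    # about 0.0013  (the 3-sigma condition, m = 0.111n)
print(p_error(0.15))         # about 0.005   (m = 0.15n)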

On the other hand, if all memories xk are required to be equilibria of the DAM with a probability close to 1,
say 0.99, then an upper bound on r can be derived by requiring that all bits of all memories xk be retrievable
with less than a 1 percent error; i.e., (1 − Perror)mn > 0.99. Employing the binomial expansion, this inequality
may be approximated as Perror < 0.01/(mn). Noting that this stringent error correction requirement necessitates small
Perror values, Equation (7.2.3) can be approximated using the asymptotic (x → ∞) expansion 1 − erf(x) ≈ exp(−x2)/(x sqrt(π)). This approximation can
then be used to write the inequality Perror < 0.01/(mn) as

sqrt(m/(2πn)) exp(−n/(2m)) < 0.01/(mn) (7.2.4)

Now, taking the natural logarithm (ln) of both sides of the inequality in Equation (7.2.4) and eliminating all
constants and the ln factor (since they are dominated by higher-order terms) results in the bound

m/n < 1/(2 ln mn) (7.2.5)

By noting that ln mn < ln n2 (since n > m), a more stringent requirement on m/n in Equation (7.2.5) becomes

m/n < 1/(4 ln n) (7.2.6)

which represents the absolute capacity of a zero-diagonal correlation-recorded DAM. The effects of a non-
zero diagonal on DAM capacity are treated in Section 7.4.3.

Another, more useful, DAM capacity measure gives a bound on r in terms of error correction and memory
size (Weisbuch and Fogelman-Soulié, 1985; McEliece et al., 1987). According to this capacity measure, a
correlation-recorded discrete Hopfield DAM must have its pattern ratio, r = m/n, satisfy

m/n < (1 − 2ρ)2/(4 ln n) (7.2.7)

in order that error-free one-pass retrieval of a fundamental memory (say xk) from random key patterns lying
inside the Hamming hypersphere (centered at xk) of radius ρn (ρ < 1/2) is achieved with probability approaching
1. Here, ρ defines the radius of attraction of a fundamental memory. In other words, ρ is the largest normalized
Hamming distance from a fundamental memory within which almost all of the initial states reach this
fundamental memory, in one pass. The inequality in Equation (7.2.7) can be derived by starting with an
equation similar to Equation (7.2.2), with the one difference that the input x(0) is not one of the stored
memories. Rather, a random input x(0) is assumed that has an overlap of (1 − 2ρ) with one of the stored memories, say
xh. Here, one can readily show that the ith retrieved bit xi(1) is in error if and only if Ci(0) > 1 − 2ρ. The error
probability for the ith bit is then given by Equation (7.2.3) with the lower limit on the integral replaced by
1 − 2ρ. This leads to Perror = (1/2)[1 − erf((1 − 2ρ) sqrt(n/(2m)))] which, by using a similar derivation to the one leading to Equation (7.2.6), leads us to
Equation (7.2.7).
The capacity analysis leading to Equations (7.2.6) and (7.2.7) assumed a single parallel retrieval iteration,
starting from x(0) and retrieving x(1). This same analysis cannot be applied if one starts the DAM at x(k),
k = 1, 2, ...; i.e., Equations (7.2.6) and (7.2.7) are not valid for the second or higher DAM iterations. In this
case, the analysis is more complicated due to the fact that x(k) becomes correlated with the stored memory
vectors, and hence the statistical properties of the noise term Δi(k) in Equation (7.2.2) are more difficult to
determine (in fact, such correlations depend on the whole history of x(k − T), T = 0, 1, 2, ...). Amari and
Maginu (1988) (see also Amari and Yanai, 1993) analyzed the transient dynamical behavior of memory
recall under the assumption of a normally distributed Δi(k) [or Ci(k)] with mean zero and variance σ2(k). This
variance was calculated by taking the direct correlations up to two steps between the bits of the stored
memories and those in x(k). Under these assumptions, the relative capacity was found to be equal to 0.16.
This theoretical value is in good agreement with early simulations reported by Hopfield (1982) and with the
theoretical value of 0.14 reported by Amit et al. (1985) using a method known as the replica method.

Komlós and Paturi (1988) showed that if Equation (7.2.6) is satisfied, then the DAM is capable of error
correction when multiple-pass retrievals are considered. In other words, they showed that each of the
fundamental memories is an attractor with a basin of attraction surrounding it. They also showed that once
initialized inside one of these basins of attraction, the state converges (in probability) to the basin's attractor
in order ln(ln n) parallel steps. Burshtein (1993) took these results a step further by showing that the radius
of the basin of attraction of each fundamental memory is ρn for any ρ < 1/2, independent of the retrieval mode
(serial or parallel). He also showed that a relatively small number of parallel iterations, asymptotically
independent of n, is required to recover a fundamental memory even when ρ is very close to 1/2 (e.g., for
ρ = 0.499, at most 20 iterations are required).

When initialized with a key input x(0) lying outside the basins of attraction of fundamental memories, the
discrete Hopfield DAM converges to one of an exponential (in n) number of spurious memories. These
memories are linear combinations of fundamental memories (Amit et al., 1985; Gardner, 1986; Komlos and
Paturi, 1988). Thus, a large number of undesirable spurious states which compete with fundamental
memories for basins of attraction "volumes" are intrinsic to correlation-recorded discrete Hopfield DAMs.

Next, the capacity and retrieval characteristics of two analog (continuous-state) correlation-recorded DAMs
(one continuous-time and the other discrete-time parallel updated) based on models introduced in the
previous section are analyzed. The first analog DAM considered here will be referred to as the continuous-
time DAM in the remainder of this section. It is obtained from Equation (7.1.21) with identity coefficient
matrices and zero bias. Its interconnection matrix W is defined by the autocorrelation version of Equation
(7.1.6) with zero diagonal. The second analog DAM is obtained from Equation (7.1.29) with zero bias. It
employs the same normalized correlation recording recipe for W with zero diagonal as the continuous-time
DAM. This latter analog DAM will be referred to as the discrete-time DAM in this section. Both DAMs
employ the hyperbolic tangent activation function with adjustable gain.

The dynamics of these two DAMs have been studied in terms of the gain and the pattern ratio for unbiased
random bipolar stored memories (Amit et al., 1985, 1987; Marcus et al., 1990; Shiino and Fukai, 1990;
Kühn et al., 1991; Waugh et al., 1993). Figure 7.2.1 shows two analytically derived phase diagrams for the
continuous-time and discrete-time DAMs, valid in the limit of large n (Marcus et al., 1990; Waugh et al.,
1993). These diagrams indicate the type of attractors as a function of the activation function gain and the
pattern ratio. For the continuous-time DAM, the diagram (Figure 7.2.1a) shows three regions labeled origin,
spin glass, and recall. In the recall region, the DAM is capable of either exact or approximate associative
retrieval of stored memories. In other words, a set of 2m attractors exists, each having a large overlap (inner
product) with a stored pattern or its inverse (the stability of the inverse of a stored memory is explored in
Problem 7.1.13). This region also contains attractors corresponding to spurious states that have negligible
overlaps with the stored memories (or their inverse). In the spin glass region (named so because of the
similarity to dynamical behavior of simple models of magnetic material (spin glasses) in statistical physics),
the desired memories are no longer attractors; hence, the only attractors are spurious states. Finally, in the
origin region, the DAM has the single attractor state x = 0.

Figure 7.2.1. Phase diagrams for correlation-recorded analog DAMs with hyperbolic tangent activation
function for (a) continuous-time and (b) parallel discrete-time updating. (Adapted from C. M. Marcus et al.,
1990, with permission of the American Physical Society.)

The boundary separating the recall and spin glass regions determines the relative capacity of the DAM. For
the continuous-time DAM, and in the limit of high gain (greater than about 10), this boundary asymptotes at a
pattern ratio which is essentially the relative capacity of the correlation-recorded discrete Hopfield DAM
analyzed earlier in this section. This result supports the arguments presented at the end of Section 7.1.2 on the
equivalence of the macroscopic dynamics of the discrete Hopfield net and the continuous-time, high-gain
Hopfield net.

The phase diagram for the discrete-time DAM is identical to the one for the continuous-time DAM except
for the presence of a fourth region marked oscillation in Figure 7.2.1(b). In this region, the stability condition
in Equation (7.1.32) is violated (note that the zero-diagonal autocorrelation weight matrix can have negative
eigenvalues; see Problem 7.2.1). Associative retrieval and spurious states may still exist in this region
(especially if the pattern ratio is small), but the DAM can also become trapped in period-2 limit cycles
(oscillations). In both DAMs, error correction capabilities cease to exist at activation gains close to or smaller
than 1, even as the pattern ratio approaches zero.

The boundary separating the recall and spin glass regions has been computed by methods that combine the
Liapunov function approach with the statistical mechanics of disordered systems. The boundary between
the spin glass and origin regions in Figure 7.2.1 (a) and (b) is given by the expression

(7.2.8)

This expression can be derived by performing local stability analysis about the equilibrium point x* = 0 that
defines the origin region. This method is explored in Problems 7.2.2 and 7.2.3 for the continuous-time and
discrete-time DAMs, respectively.

The associative retrieval capability of the above analog DAMs can vary considerably even within the recall
region. It has been shown analytically and empirically that the basin of attraction of fundamental memories
increases substantially as the activation function gain decreases with the pattern ratio held fixed.
Waugh et al. (1991, 1993) showed that the number of local minima (and thus spurious states) in the
Liapunov function of the above DAMs increases exponentially as exp(ng), where g is a monotonically
increasing function of the activation gain. Therefore, even a small decrease in the gain can lead to a substantial
reduction in the number of local minima, especially the shallow ones which correspond to spurious
memories. The reason behind the
improved DAM performance as gain decreases is that the Liapunov function becomes smoother, so that
shallow local minima are eliminated. Since the fundamental memories tend to lie in wide, deep basins,
essentially all of the local minima eliminated correspond to spurious memories. This phenomenon is termed
deterministic annealing and it is reminiscent of what happens as temperature increases in simulated
annealing (the reader is referred to the next chapter for a discussion of annealing methods in the context of
neural networks).
7.2.2 Projection DAMs

The capacity and performance of autoassociative correlation-recorded DAMs can be greatly improved if
projection recording is used to store the desired memory vectors (recall Equation (7.1.12), with Y = X).
Here, any set of memories can be memorized without errors as long as they are linearly independent (note
that linear independence restricts m to be less than or equal to n). In particular, projection DAMs are well
suited for memorizing unbiased random vectors xk ∈ {−1, +1}n, since it can be shown that the probability that
m (m < n) such vectors are linearly independent approaches 1 in the limit of large n (Komlós, 1967). In
the following, the retrieval properties of projection-recorded discrete-time DAMs are analyzed. More
specifically, the two versions of discrete-state DAMs, serially updated and parallel updated, and the parallel
updated continuous-state DAM are discussed. For the remainder of this section, these three DAMs will be
referred to as the serial binary, parallel binary, and parallel analog projection DAMs, respectively. The
following analysis assumes the usual unbiased random bipolar binary memory vectors.
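
Before turning to the simulation results, the following minimal sketch (illustrative only; the helper names are
not from the text, and the projection recipe of Equation (7.1.12) with Y = X is realized here through the
matrix pseudo-inverse) shows projection recording and multiple-pass retrieval in code.

import numpy as np

def projection_recording(X, zero_diagonal=True):
    """W = X X^+ projects onto the span of the stored memories (columns of X);
    optionally the diagonal is forced to zero, as in the DAMs discussed above."""
    X = np.asarray(X, dtype=float)
    W = X @ np.linalg.pinv(X)
    if zero_diagonal:
        np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, key, max_passes=20):
    """Parallel (synchronous) retrieval with sgn activations until a fixed point is reached."""
    x = np.sign(np.asarray(key, dtype=float))
    for _ in range(max_passes):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(64, 3))          # three random memories, n = 64
W = projection_recording(X)
key = X[:, 0].copy()
key[rng.choice(64, size=6, replace=False)] *= -1   # flip 6 bits of the first memory
print(np.array_equal(retrieve(W, key), X[:, 0]))   # typically True at this low loading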

The relation between the radius of attraction of fundamental memories and the pattern ratio is a useful
measure of DAM retrieval/error correction characteristics. For correlation-recorded binary DAMs, such a
relation has been derived analytically for single-pass retrieval and is given by Equation (7.2.7). On the other
hand, deriving similar relations for multiple-pass retrievals and/or more complex recording recipes (such as
projection recording) is a much more difficult problem. In such cases, numerical simulations with large n
values (typically equal to several hundred) are a viable tool (e.g., see Kanter and Sompolinsky, 1987; Amari
and Maginu, 1988). Figure 7.2.2, reported by Kanter and Sompolinsky (1987), shows plots of the radius of
attraction of fundamental memories versus the pattern ratio, generated by numerical simulation for serial
binary and parallel binary projection DAMs (multiple-pass retrieval is assumed).

Figure 7.2.2. Measurements of the radius of attraction as a function of the pattern ratio, by computer
simulation, for projection-recorded binary DAMs. The lines are guides to the eye. The typical size of the
statistical fluctuations is indicated. Lines tagged by a refer to a zero-diagonal projection matrix W. Lines
tagged by b refer to a standard projection matrix. Solid lines refer to serial update with a specific order of
updating as described in the text. Dashed lines refer to parallel update. (Adapted from I. Kanter and
H. Sompolinsky, 1987, with permission of the American Physical Society.)

There are two pairs of plots, labeled a and b, in this figure. The pair labeled a corresponds to the case of a
zero-diagonal projection weight matrix W, whereas pair b corresponds to the case of a projection matrix W
with preserved diagonal. The solid and dashed lines in Figure 7.2.2 represent serial and parallel retrievals,
respectively.

According to these simulations, forcing the self-coupling terms wii in the diagonal of W to zero has a drastic
effect on the size of the basin of attraction. Note that, in the nonzero-diagonal case, the error correction
capability of fundamental memories ceases well before full loading is reached, for both serial and parallel
DAMs (Problem 7.2.8 explores this phenomenon). On the other hand, the corresponding DAMs with
zero-diagonal projection matrix continue to have substantial error correction capabilities at higher loading,
but ultimately lose these capabilities as the pattern ratio approaches 1. A common feature of these discrete
projection DAMs is the monotonic decrease of the radius of attraction from 0.5 to 0 as the pattern ratio
increases from 0 to 1. Empirical results show that, inside the basin of attraction of stable states, the flows to
the stable states are fast, with a maximum of 10 to 20 parallel iterations (corresponding to starting at the edge
of the basin). These results are similar to the theoretical ones reported for parallel updated correlation-
recorded DAMs (Burshtein, 1993).
For the above parallel updated zero-diagonal projection DAM, simulations show that in almost all cases
where the retrieval did not result in a fundamental memory, it resulted in a limit cycle of period two as
opposed to a spurious memory. For the preserved-diagonal projection DAM, simulations with finite n show
that no oscillations exist (Youssef and Hassoun, 1989). (The result of Problem 7.2.10, taken in the limit of
large n, can serve as an analytical proof for the nonexistence of oscillations for the case of large n.)

The zero-diagonal serial projection DAM has the best performance, depicted by the solid line a in Figure
7.2.2. In this case an approximate linear relation between the radius of attraction ρ and the pattern ratio m/n
can be deduced from this figure as

ρ ≈ 0.5 (1 − m/n)    (7.2.9)

Here, the serial update strategy used employs an updating order such that the initial updates are more likely
to reduce the Hamming distance (i.e., increase the overlap) between the DAM's state and the closest
fundamental memory rather than increase it. In the simulations, the initial state x(0) tested had its first
(1 − ρ)n bits identical to one of the stored vectors, say x1, while the remaining ρn bits were chosen randomly.
Thus the latter region represents the bits where errors are more likely to occur. The serial update strategy
used above allowed the units corresponding to the initially random bits to update their states before the ones
having the correct match with x1. However, in practical applications, this update strategy may not be
applicable (unless we have partial input keys that match a segment of one of the stored memories, such as a
partial image) and hence a standard serial update strategy (e.g., updating the n states in some random or
unbiased deterministic order) may be employed. Such standard serial updating leads to reduced error
correction behavior compared to the particular serial update employed in the above simulations. The
performance, though, would still be better than that of a parallel updated DAM. Spurious memories do exist
in the above projection DAMs. These spurious states are mixtures of the fundamental memories (just as in
the correlation-recorded discrete Hopfield DAM) at very small pattern ratios. Above a pattern ratio of about
0.1, mixture states disappear. Instead, most of the spurious states have very little overlap with individual
fundamental memories.

Lastly, consider the parallel analog projection DAM with zero-diagonal interconnection matrix W. This
DAM has the phase diagram shown in Figure 7.2.3, showing origin, recall, and oscillation phases, but no
spin glass phase (Marcus et al., 1990). The absence of the spin glass phase does not imply that this DAM
does not have spurious memories; just as for the correlation-recorded discrete Hopfield DAM, there are
many spurious memories within the recall and oscillation regions which have small overlap with
fundamental memories, especially at large pattern ratios. However, there is no region where only spurious
memories exist. Note also that in the oscillation region, all fundamental memories exist as stable equilibrium
states with basins of attraction defined around each of them. The radius of these basins decreases
(monotonically) as the pattern ratio increases until ultimately all such memories lose their basins of
attraction.

Figure 7.2.3. Phase diagram for the parallel updated analog projection DAM (with zero-diagonal W).
(Adapted from C. M. Marcus et al., 1990, with permission of the American Physical Society.)

According to Equation (7.1.32), oscillations are possible in the dynamics of the present analog DAM when
the activation gain times |λmin| exceeds 1, where λmin is the minimal eigenvalue of the interconnection matrix
W. It can be shown (Kanter and Sompolinsky, 1987) that a zero-diagonal projection matrix which stores m
unbiased random memory vectors xk ∈ {−1, +1}n has the extremal eigenvalues λmax = 1 − m/n and
λmin = −m/n. Therefore, the oscillation region in the phase diagram of Figure 7.2.3 is defined by the gain
exceeding n/m. Also, by following an analysis similar to the one outlined in Problem
7.2.3, it can be shown that the origin point loses its stability when the gain exceeds 1/(1 − m/n) for
m/n ≤ 0.5 and n/m for m/n > 0.5. With these expressions, it can be easily seen that oscillation-free associative
retrieval is possible up to a pattern ratio of 0.5 if the gain is equal to 2. Adding a positive diagonal element d
to W shifts the extremal eigenvalues λmin and λmax to d − m/n and 1 + d − m/n, respectively, and thus increases
the range of pattern ratios for which oscillation-free associative retrieval of fundamental memories is possible,
with the largest such range occurring at a gain of 2. The recall regions for several values of the diagonal
element are shown in Figure 7.2.4.
Here, one should note that the increase in the size of the recall region does not necessarily imply increased
error correction. On the contrary, a large diagonal term greatly reduces the size of the basins of attraction of
fundamental memories as was seen earlier for the binary projection DAM. The reader is referred to Section
7.4.3 for further exploration into the effects of the diagonal term on DAM performance.

Figure 7.2.4. The recall region of a parallel updated analog projection DAM for various values of the
diagonal element. (Adapted from C. M. Marcus et al., 1990, with permission of the American Physical Society.)

7.3 Characteristics of High-Performance DAMs

Based on the above analysis and comparison of DAM retrieval performance, a set of desirable performance
characteristics can be identified. Figures 7.3.1 (a) and (b) present a conceptual diagram of the state space for
high- and low-performance DAMs, respectively (Hassoun, 1993).

Figure 7.3.1. A conceptual diagram comparing the state space of (a) high-performance and (b) low-
performance autoassociative DAMs.

The high-performance DAM in Figure 7.3.1(a) has large basins of attraction around all fundamental
memories. It has a relatively small number of spurious memories, and each spurious memory has a very
small basin of attraction. This DAM is stable in the sense that it exhibits no oscillations. The shaded
background in this figure represents the region of state space for which the DAM converges to a unique
ground state (e.g., the zero state). This ground state acts as a default "no decision" attractor to which
unfamiliar or highly corrupted initial states converge.

A low-performance DAM has one or more of the characteristics depicted conceptually in Figure 7.3.1(b). It
is characterized by its inability to store all desired memories as fixed points; those memories which are
stored successfully end up having small basins of attraction. The number of spurious memories is very high
for such a DAM, and they have relatively large basins of attraction. This low performance DAM may also
exhibit oscillations. Here, an initial state close to one of the stored memories has a significant chance of
converging to a spurious memory or to a limit cycle.

To summarize, high-performance DAMs must have the following characteristics (Hassoun and Youssef,
1989): (1) High capacity. (2) Tolerance to noisy and partial inputs. This implies that fundamental memories
have large basins of attraction. (3) The existence of only relatively few spurious memories and few or no
limit cycles with negligible size of basins of attraction. (4) Provision for a "no decision" default
memory/state; inputs with very low "signal-to-noise" ratios are mapped (with high probability) to this
default memory. (5) Fast memory retrievals. This list of high-performance DAM characteristics can act as
performance criteria for comparing various DAM architectures and/or DAM recording recipes.

The capacity and performance of DAMs can be improved by employing optimal recording recipes (such as
the projection recipe) and/or using proper state updating schemes (such as serial updating) as was seen in
Section 7.2. Yet, one may also improve the capacity and performance of DAMs by modifying their basic
architecture or components. Such improved DAMs and other common DAM models are presented in the
next section.

7.4 Other DAM Models

As compared to the above models, a number of more sophisticated DAMs have been proposed in the
literature. Some of these DAMs are improved variations of the ones discussed above. Others, though, are
substantially different models with interesting behavior. The following is a sample of such DAMs [for a
larger sample of DAM models and a thorough analysis, the reader is referred to Hassoun (1993)].

7.4.1 Brain-State-in-a-Box (BSB) DAM

The "brain-state-in-a-box" (BSB) model ( Anderson et al., 1977is one of the earliest DAM models. It is a
discrete-time continuous-state parallel updated DAM whose dynamics are given by

(7.4.1)

where the input key is presented as the initial state x(0) of the DAM. Here, the term proportional to x(k), with
a coefficient between 0 and 1, acts as a decay of the state x(k), and the feedback term Wx(k) is scaled by a
positive constant which represents the feedback gain. The vector [I1 I2 ... In]T represents a scaled external
input (bias) to the system, which persists for all time k; a common particular choice is zero bias (i.e., no
external input). The operation F() is a piece-wise linear operator which maps the ith component of its
argument vector according to:

(7.4.2)

The BSB model gets its name from the fact that the state of the system is continuous and constrained to be
in the hypercube [−1, +1]n.
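
The update itself is simple to state in code. The sketch below is not from the text; the names decay, gain, and
bias merely stand in for the constants of Equation (7.4.1), and the clipping onto the hypercube plays the role
of the piecewise linear operator F of Equation (7.4.2).

import numpy as np

def bsb_step(x, W, gain=0.3, decay=1.0, bias=0.0):
    """One BSB update: clip(decay*x + gain*W@x + bias) onto [-1, +1]^n."""
    return np.clip(decay * x + gain * (W @ x) + bias, -1.0, 1.0)

def bsb_retrieve(key, W, gain=0.3, decay=1.0, max_steps=100):
    """Iterate the BSB map from the key until the state stops changing (or give up)."""
    x = np.asarray(key, dtype=float)
    for _ in range(max_steps):
        x_new = bsb_step(x, W, gain=gain, decay=decay)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x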

When operated as a DAM, the BSB model typically employs an interconnection matrix W given by the
correlation recording recipe to store a set of m n-dimensional bipolar binary vectors as attractors (located at
corners of the hypercube [−1, +1]n). Here, one normally sets the bias to zero and assumes the input to the DAM
(i.e., x(0)) to be a noisy vector which may be anywhere in the hypercube [−1, +1]n. The performance of this
DAM with random stored vectors, large n, and m << n has been studied through numerical simulations by
Anderson (1993). These simulations particularly address the effects of the model parameters on memory
retrieval.

The stability of the BSB model in Equation (7.4.1) with symmetric W, zero bias, and unity decay coefficient
has been analyzed by several researchers including Golden (1986), Greenberg (1988), Hui and Żak (1992),
and Anderson (1993).
In this case, this model reduces to

(7.4.3)

Golden (1986, 1993) analyzed the dynamics of the system in Equation (7.4.3) and found that it behaves as a
gradient descent system that minimizes the energy
(7.4.4)

He also proved that the dynamics in Equation (7.4.3) always converge to a local minimum of E(x) if W is
symmetric and λmin ≥ 0 (i.e., W is positive semidefinite), or if the feedback gain is sufficiently small relative to
|λmin|, where λmin is the smallest eigenvalue of W. With these conditions, the stable equilibria of this model are
restricted to the surface and/or vertices of the hypercube. It is interesting to note here that when this BSB
DAM employs correlation recording (with the diagonal of W preserved), it always converges to a minimum
of E(x) because of the positive semidefinite symmetric nature of the autocorrelation matrix. The following
example illustrates the dynamics of a two-state zero-diagonal correlation-recorded BSB DAM.

Example 7.4.1: Consider the problem of designing a simple BSB DAM which is capable of storing the
memory vector x = [+1 −1]T. One possible way of recording this DAM with x is to employ the normalized
correlation recording recipe of Equation (7.1.6). This recording results in a symmetric weight matrix with
off-diagonal entries equal to −0.5, after forcing the diagonal elements to zero. This matrix has the two
eigenvalues λmin = −0.5 and λmax = 0.5. The energy function for this DAM is given by Equation (7.4.4) and is
plotted in Figure 7.4.1. The figure shows two minima of equal energy at the state [+1 −1]T and its complement
state [−1 +1]T, and two maxima of equal energy at [+1 +1]T and [−1 −1]T. Simulations using the BSB dynamics
of Equation (7.4.3) are shown in Figure 7.4.2 for a number of initial states x(0). Using feedback gains of 1 and
0.3 resulted in convergence to one of the two minima of E(x), as depicted in Figures 7.4.2a and 7.4.2b,
respectively. The basins of attraction of these stable states are equal in size and are separated by the line
x2 = x1. Note that the gain values used here satisfy the convergence condition stated above. The effects of
violating this condition on the stability of the DAM are shown in Figure 7.4.3, where the gain was set equal
to five. The figure depicts a limit cycle or an oscillation between the two states of maximum energy, [−1 −1]T
and [+1 +1]T. This limit cycle was generated by starting from x(0) = [0.9 0.7]T. Starting from
x(0) = [0.9 0.6]T leads to convergence to the desired state [+1 −1]T, as depicted by the lower state trajectory in
Figure 7.4.3. It is interesting to note how this state was reached by bouncing back and forth off the boundaries
of the state space [−1, +1]2.

Figure 7.4.1. A plot of the energy function E(x) for the BSB DAM of Example 7.4.1. There are two minima
with energy E = −0.5 at states [+1 −1]T and [−1 +1]T, and two maxima with energy E = 0.5 at [+1 +1]T and
[−1 −1]T.


Figure 7.4.2. State space trajectories of a two-state BSB DAM which employs a zero-diagonal
autocorrelation weight matrix to store the memory vector x = [+1 −1]T. The resulting weight matrix is
symmetric with λmin = −0.5 and λmax = 0.5. (a) gain = 1, and (b) gain = 0.3. Circles indicate state transitions.
The lines are used as guides to the eye.

Figure 7.4.3. State space trajectories of the BSB DAM of Figure 7.4.2, but with gain = 5. The limit cycle (top
trajectory) was obtained by starting from x(0) = [0.9 0.7]T. The converging dynamics (bottom trajectory)
was obtained by starting from x(0) = [0.9 0.6]T.
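
The trajectories of Example 7.4.1 can be reproduced with a few lines of code (an illustrative sketch, not from
the text), using the weight matrix with off-diagonal entries −0.5 and the clipped update of Equation (7.4.3).

import numpy as np

W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])                 # zero-diagonal correlation recording of [+1, -1]^T

def run(x0, gain, steps=30):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x + gain * (W @ x), -1.0, 1.0)   # decay = 1, no bias
    return x

print(run([0.9, 0.6], gain=0.3))            # converges to the minimum [+1, -1]
print(run([0.9, 0.6], gain=5.0))            # still converges, after bouncing off the walls
print(run([0.9, 0.7], gain=5.0, steps=30))  # limit cycle: lands on [+1, +1] ...
print(run([0.9, 0.7], gain=5.0, steps=31))  # ... and on [-1, -1] one step later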

Greenberg (1988) showed the following interesting BSB DAM property: all vertices of a BSB DAM are
attractors (asymptotically stable equilibria) if

wii > Σj≠i |wij|,   i = 1, 2, ..., n    (7.4.5)

Equation (7.4.5) defines what is referred to as a "strongly" row diagonally dominant matrix W. As an
example, it is noted that the BSB DAM with W = I has all of its vertices as attractors. For associative
memories, though, it is not desired to have all vertices (2^n of them) of the hypercube as attractors. Therefore,
a row diagonally dominant weight matrix is to be avoided (recall that the interconnection matrix in a DAM is
usually treated by forcing its diagonal to zero).

A more general result concerning the stability of the vertices of the BSB model in Equation (7.4.1), with
unity decay coefficient and a constant bias, was reported by Hui and Żak (1992, 1993). They showed that if a
suitable diagonal dominance condition (involving the bias Ii) holds for i = 1, 2, ..., n, then all vertices of the
bipolar hypercube are asymptotically stable equilibrium points. Here, W need not be symmetric and Ii is an
arbitrary constant bias input to the ith unit. Hui and Żak also showed that, if W is symmetric, a hypercube
vertex x* which satisfies the condition

(7.4.6)

is a stable equilibrium. Here, [·]i signifies the ith component of the argument vector. Equation (7.4.6) is
particularly useful in characterizing the capacity of a zero-diagonal correlation-recorded BSB DAM where
m unbiased and independent random vectors xk ∈ {−1, +1}n are stored. Let xh be one of these vectors and
substitute it in Equation (7.4.6). Assuming the DAM is receiving no bias, the inequality in Equation
(7.4.6) becomes

(7.4.7)

or

(7.4.8)
since (xih)2 = 1 and the feedback gain is positive. Thus, the vertex xh is an attractor if

(7.4.9)

or, equivalently

(7.4.10)

where the term inside the parentheses is the cross-talk term. In Section 7.2.1, it was determined that the
probability of the n inequalities in Equation (7.4.10) being more than 99 percent correct for all m memories
approaches 1, in the limit of large n, if m < n/(4 ln n). Hence, it is concluded that the absolute capacity of the
BSB DAM for storing random bipolar binary vectors is identical to that of the discrete Hopfield DAM when
correlation recording is used with zero self-coupling (i.e., wii = 0 for all i). In fact, the present capacity result
is stronger than the absolute capacity result of Section 7.2.1: when m is smaller than this bound, the condition
of Equation (7.4.6) is satisfied and, therefore, all xk vectors are stable equilibria.

7.4.2 Non-Monotonic Activations DAM

As indicated in Section 7.3, one way of improving DAM performance for a given recording recipe is by
appropriately designing the DAM components. Here, the idea is to design the DAM retrieval process so that
the DAM dynamics exploit certain known features of the synthesized interconnection matrix W. This
section presents correlation-recorded DAMs whose performance is significantly enhanced as a result of
modifying the activation functions of their units from the typical sgn- or sigmoid-type activation to more
sophisticated non-monotonic activations. Two DAMs are considered: A discrete-time discrete-state
parallel-updated DAM, and a continuous-time continuous-state DAM.

Discrete Model

First, consider the zero-diagonal correlation-recorded discrete Hopfield DAM discussed earlier in this
chapter. The retrieval dynamics of this DAM show some strange dynamical behavior. When initialized with
a vector x(0) that has an overlap p with one of the stored random memories, say x1, the DAM state x(k)
initially evolves towards x1 but does not always converge or stay close to x1 as shown in Figure 7.4.4. It has
been shown (Amari and Maginu, 1988) that the overlap p(k), when started with p(0) less than some critical
value pc, initially increases but soon starts to decrease and ultimately stabilizes at a value less than 1. In this
case, the DAM converges to a spurious memory as depicted schematically by the trajectory on the right in
Figure 7.4.4. The value of pc increases (from zero) monotonically with the pattern ratio and increases
sharply from about 0.5 to 1 as the pattern ratio becomes larger than 0.15, the DAM's relative capacity (note
that pc can also be written as 1 − 2ρ, where ρ is the radius of attraction of a fundamental memory as in
Section 7.2.1). This peculiar phenomenon can be explained by first noting the effects of the overlaps p(k) and
qh(k), h ≠ 1, on the ith unit weighted sum ui(k) given by

(7.4.11)

or, when written in terms of p(k) and qh(k),

(7.4.12)

Note the effects of the overlap terms qh(k) on the value of ui(k). The higher the overlaps with memories
other than x1, the larger the value of the cross-talk term [the summation term in Equation (7.4.12)] which, in
turn, drives |ui(k)| to large values. Morita (1993) showed, using simulations, that both the sum of squares of
the overlaps with all stored memories except x1, defined as

(7.4.13)

and p2(k) initially increase with k. Then, one of two scenarios might occur. In the first scenario, s(k) begins
to decrease and p2(k) continues to increase until it reaches 1; i.e., x(k) stabilizes at x1. Whereas in the
second scenario, s(k) continues to increase and may attain values larger than 1 while p2(k) decreases.
Figure 7.4.4. Schematic representation of converging trajectories in a correlation-recorded discrete Hopfield
DAM. When the distance (overlap) between x(0) and x1 is larger (smaller) than some critical value, the
DAM converges to a spurious memory (right hand side trajectory). Otherwise, the DAM retrieves the
fundamental memory x1. (From Neural Networks, 6, M. Morita, Associative Memory With Nonmonotone
Dynamics, pp. 115-126, Copyright 1993, with kind permission from Pergamon Press Ltd., Headington Hill
Hall, Oxford 0X3 0BW, UK.)

The above phenomenon suggests a method for improving DAM performance by modifying the dynamics of
the Hopfield DAM such that the state is forced to move in a direction that reduces s(k), but not p2(k). One
such method is to reduce the influence of units with large |ui| values, since such units actually cause the
increase in s(k). The influence of a unit i whose |ui| exceeds some positive threshold can be reduced by
reversing the sign of xi. This method can be implemented using the "partial reverse" dynamics (Morita, 1993)
given by:

(7.4.14)

where the coefficient of the partial reverse term is positive and F and G are activation functions which operate
componentwise on their vector arguments. Here, F is the sgn activation function and G is defined by

(7.4.15)

where u is a component of the vector u = Wx(k). The values of the threshold and the partial reverse coefficient
must be determined with care; empirically, a value of 2.7 may be chosen for one of them. These parameters
are chosen so that the number of units satisfying the threshold condition is small when x(k) is close to any of
the stored memories, provided the pattern ratio is not too large. It should be noted that Equation (7.4.14) does
not always converge to a stable equilibrium. Numerical simulations show that a DAM employing this partial
reverse method has several advantages over the same DAM with pure sgn activations. These advantages
include a smaller critical overlap pc (i.e., wider basins of attraction for fundamental memories), faster
convergence, a lower rate of convergence to spurious memories, and error correction capability at higher
pattern ratios.
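
A rough sketch of one reading of the partial reverse rule follows (this is an assumed form, not the text's
Equations (7.4.14) and (7.4.15); the names eps and theta for the coefficient and threshold are illustrative): G
returns the sign of weighted sums whose magnitude exceeds the threshold and zero otherwise, and its scaled
output is subtracted before applying the sgn function F.

import numpy as np

def partial_reverse_step(x, W, eps=2.7, theta=1.5):
    """One parallel update x(k+1) = sgn(u - eps*G(u)) with u = W x(k), where
    G(u)_i = sgn(u_i) if |u_i| > theta and 0 otherwise (an assumed reading of the
    partial reverse dynamics; units with large |u_i| have their influence reduced)."""
    u = W @ x
    g = np.where(np.abs(u) > theta, np.sign(u), 0.0)
    x_new = np.sign(u - eps * g)
    x_new[x_new == 0] = 1.0
    return x_new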

Continuous Model

Consider the continuous-time DAM in Equation (7.1.21) with identity coefficient matrices and zero bias. Namely,

(7.4.16)

where W is the usual zero-diagonal normalized autocorrelation matrix. Here, the partial reverse method
described above cannot be applied directly due to the continuous DAM dynamics. One can still capture the
essential elements of this method, though, by designing the activation function such that it reduces the
influence of unit i if |ui| is very large. This can be achieved by employing the non-monotone activation
function shown in Figure 7.4.5 with the following analytical form (Morita et al., 1990 a, b):

(7.4.17)

where the four constants are positive, with typical values of 50, 15, 1, and 0.5, respectively. This non-
monotone activation function operates to keep the variance of |ui| from growing too large, and hence
implements a similar effect to the one implemented by the partial reverse method. Empirical results show
that this DAM has an absolute capacity that is proportional to n with substantial error correction
capabilities. Also, this DAM almost never converges to spurious memories when retrieval of a fundamental
memory is not successful; instead, the DAM state continues to wander (chaotically) without reaching any
equilibrium (Morita, 1993).

Figure 7.4.5. Non-monotonic activation function generated from Equation (7.4.17) with the typical parameter
values given above.
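
For concreteness, the sketch below evaluates one commonly quoted algebraic form of such a non-monotone
activation (this is an assumption about the exact expression, since Equation (7.4.17) is not reproduced here):
it behaves like a steep sigmoid for small |u| and folds back toward the opposite sign once |u| exceeds a
threshold of about 0.5.

import numpy as np

def nonmonotone_activation(u, c=50.0, c_prime=15.0, kappa=-1.0, h=0.5):
    """An often-cited form of the non-monotone activation:
    f(u) = [(1 - exp(-c*u)) / (1 + exp(-c*u))]
           * [(1 + kappa*exp(c'*(|u| - h))) / (1 + exp(c'*(|u| - h)))].
    For |u| << h the second factor is near 1 (ordinary sigmoid behavior); for
    |u| >> h it approaches kappa, suppressing units with large weighted sums."""
    u = np.asarray(u, dtype=float)
    sigmoid_part = (1.0 - np.exp(-c * u)) / (1.0 + np.exp(-c * u))
    fold = (1.0 + kappa * np.exp(c_prime * (np.abs(u) - h))) / (1.0 + np.exp(c_prime * (np.abs(u) - h)))
    return sigmoid_part * fold

print(nonmonotone_activation(np.array([0.1, 0.4, 1.0, 2.0])))   # positive for small u, then folds negative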

Figure 7.4.6 gives simulation-based capacity curves (Yoshizawa et al., 1993a) depicting plots of the critical
overlap pc (i.e., 1 − 2ρ, where ρ is the radius of attraction of a fundamental memory) versus the pattern ratio
for three DAMs with the dynamics in Equation (7.4.16). Two of the DAMs (represented by curves A and B)
employ a zero-diagonal autocorrelation interconnection matrix and a sigmoidal activation function. The
third DAM (curve C) employs projection recording with preserved W diagonal and the activation function
in Equation (7.4.17). As expected, the DAM with sigmoid activation (curve A) loses its associative retrieval
capabilities and the ability to retain the stored memories as fixed points (designated by the dashed portion
of curve A) as the pattern ratio approaches 0.15. On the other hand, and with the same interconnection
matrix, the non-monotone activation DAM exhibits good associative retrieval for a wide range of pattern
ratios, even beyond 0.5 (for example, curve B of Figure 7.4.6 predicts a basin of attraction radius of about
0.22 at a pattern ratio of 0.5, which means that proper retrieval is possible from initial states having 22 percent
or fewer random errors with respect to any one of the stored memories). It is interesting to note that this
performance exceeds that of a projection-recorded DAM with sigmoid activation function, represented by
curve C. Note, though, that the performance of the zero-diagonal projection-recorded discrete DAM with
serial update of states (refer to Section 7.2.2 and Figure 7.2.2) exceeds that of the non-monotone activations
correlation-recorded DAM. Still, the demonstrated retrieval capabilities of the non-monotone activations
DAM are impressive. The non-monotone dynamics can thus be viewed as extracting and using intrinsic
information from the autocorrelation matrix which the "sigmoid dynamics" is not capable of utilizing. For a
theoretical treatment of the capacity and stability of the non-monotone activations DAM, the reader is
referred to Yoshizawa et al. (1993a, b). Nishimori and Opris (1993) reported a discrete-time discrete-state
version of this model employing the nonmonotonic activation function of Figure 7.4.5, and gave a complete
characterization of this model's maximum capacity as a function of the activation function parameters.

Figure 7.4.6. Simulation generated capacity/error correction curves for the continuous-time DAM of
Equation (7.4.16). Curves A and B represent the cases of zero-diagonal correlation-recorded DAM with
sigmoidal activation function and non-monotone activation function, respectively. Curve C is for a
projection recorded (preserved diagonal) DAM with sigmoidal activation function which is given for
comparison purposes. (From Neural Networks, 6, S. Yoshizawa et al., Capacity of Associative Memory
Using a Nonmonotonic Neuron Model, pp. 167-176, with kind permission from Pergamon Press Ltd.,
Headington Hill Hall, Oxford 0X3 0BW, UK.)
7.4.3 Hysteretic Activations DAM

Associative recall of DAMs can be improved by introducing hysteresis to the units' activation function. This
phenomenon is described next in the context of a discrete Hopfield DAM. Here, the interconnection matrix
W is the normalized zero-diagonal autocorrelation matrix with the ith DAM state updated according to:

(7.4.18)

where the activation function fi is given by:

(7.4.19)

The following discussion assumes the same hysteretic parameter for all units i = 1, 2, ..., n. A plot of this
activation function is given in Figure 7.4.7, which shows a hysteretic property controlled by this positive
parameter. Qualitatively speaking, the hysteresis term proportional to xi(k) in Equation (7.4.19) favors a unit
staying in its current state xi(k); the larger the hysteretic parameter, the higher the tendency of unit i to retain
its current state.

Figure 7.4.7. Transfer characteristics of a unit with hysteretic activation function.
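
A minimal sketch of a hysteretic update step follows (not from the text; delta is an illustrative name for the
hysteretic parameter of Equation (7.4.19)). Note that adding delta times the current state to each weighted sum
is the same as keeping a constant diagonal of value delta in W, a connection developed below.

import numpy as np

def hysteretic_step(x, W, delta=0.1):
    """One parallel retrieval step with hysteretic units: each unit thresholds
    sum_j w_ij x_j(k) + delta*x_i(k), so a positive delta biases unit i toward
    keeping its current state x_i(k)."""
    u = W @ x + delta * np.asarray(x, dtype=float)
    return np.where(u >= 0.0, 1.0, -1.0)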

In the previous section, it was noted that when the state of a DAM is not far from a fundamental memory,
degradation of the associative retrieval process is caused by the units moving from the right states to the
wrong ones. The motivation behind the hysteretic property is that it causes units with proper response to
preserve their current state longer, thus increasing the chance for the DAM to correct its "wrong" states and
ultimately converge to the closest fundamental memory. But, simultaneously, there are units moving from
the wrong states to the right ones, and hysteresis tends to prevent these transitions as well. It has been
shown (Yanai and Sawada, 1990) that, for a proper choice of the hysteretic parameter, the former process is
more effective than the latter and associative retrieval can be enhanced. For a small hysteretic parameter,
Yanai and Sawada (1990) showed that the absolute capacity of a zero-diagonal correlation-recorded DAM
with hysteretic units is given by the limit

(7.4.20)

This result can be derived by first noting that the ith bit transition for the above DAM may be described by
Equation (7.2.2) with xi(0) scaled by one plus the hysteretic parameter, and following a derivation similar to
that in Equations (7.2.3) through (7.2.6). By comparing Equations (7.4.20) and (7.2.6), we find that hysteresis
leads to a substantial increase in the number of memorizable vectors as compared to using no hysteresis at all.
Yanai and Sawada also showed that the relative capacity increases with the hysteretic parameter (e.g., the
relative capacity increases from 0.15 with no hysteresis to about 0.25). This implies that the basins of
attraction of fundamental memories increase when hysteresis is employed. Empirical results suggest that a
hysteretic parameter slightly higher than m/n (with m/n << 1) leads to the largest basin of attraction size
around fundamental memories.

Hysteresis can arise from allowing a non-zero diagonal element wii, i = 1, 2, ..., n. To see this, consider a
discrete DAM with a normalized autocorrelation matrix whose diagonal is preserved. Note that this matrix has
diagonal elements wii = m/n, i = 1, 2, ..., n. Now, the update rule for the ith unit is

(7.4.21)

Comparing Equation (7.4.21) to Equation (7.4.19) of a hysteretic activation DAM reveals that the two
DAMs are mathematically equivalent if the hysteretic parameter is set equal to m/n (it is interesting to note
that this is, approximately, the empirically optimal value for the hysteretic parameter). Therefore, it is
concluded that preserving the original diagonal in a correlation-recorded DAM is advantageous in terms of
the quality of associative retrieval when m/n << 1 (see Problem 7.4.8 for further explanation).

The advantages of small positive self-connections have also been demonstrated for projection-recorded
discrete DAMs. Krauth et al. (1988) demonstrated that using a small positive diagonal element with the
projection-recorded discrete DAM increases the radius of attraction of fundamental memories if the DAM is
substantially loaded (i.e., the pattern ratio is approximately greater than 0.4). For example, they found
numerically that at a pattern ratio of 0.5, using a diagonal term of about 0.075 instead of zero increases the
basins of attraction of fundamental memories by about 50%. For the projection DAM, though, retaining the
original diagonal leads to relatively large values for the self-connections if the pattern ratio is large (for m
unbiased random memories, wii is approximately m/n, i = 1, 2, ..., n). This greatly reduces the basin of
attraction size for fundamental memories, as was shown empirically in Section 7.2.2. All this suggests that
the best approach is to give all self-connections a small positive value much less than 1.

7.4.4 Exponential Capacity DAM

Up to this point, the best DAM considered has a capacity proportional to n, the number of units in the
DAM. However, very high capacity DAMs (e.g., m exponential in n) can be realized if one is willing to
consider more complex memory architectures than the ones considered so far. An exponential capacity
DAM was proposed by Chiueh and Goodman (1988, 1991). This DAM can store up to c^n (c > 1) random
vectors xh ∈ {−1, +1}n, h = 1, 2, ..., m, with substantial error correction abilities. The exponential DAM is
described next.

Consider the architecture in Figure 7.4.8. This architecture describes a two-layer dynamic autoassociative
memory. The dynamics are nonlinear due to the nonlinear operations G and F. The output nonlinearity F
implements sgn activations which gives the DAM a discrete-state nature. The matrix X is the matrix whose
columns are the desired memory vectors xh, h = 1, 2, ..., m. This DAM may update its state x(k) in either
serial or parallel mode. The parallel updated dynamics are given in vector form as:

(7.4.22)

and in component form as:

(7.4.23)

where g is the scalar version of the operator G. Here, g is normally assumed to be a continuous monotone
non-decreasing function over [−n, +n]. The recording of a new memory vector xh is simply done by
augmenting the matrices XT and X with (xh)T and xh, respectively (this corresponds to allocating two new
n-input units, one in each layer, in the network of Figure 7.4.8). The choice of the first-layer activation
functions, g, plays a critical role in determining the capacity and dynamics of this DAM. To see this,
assume first a simple linear activation function g(u) = u. This assumption reduces the dynamics in Equation
(7.4.22) to

(7.4.24)

which is simply the dynamics of the correlation-recorded discrete DAM.

Figure 7.4.8. Architecture of a very high capacity discrete DAM.

On the other hand, if one chooses g(u) = (n + u)^q, where q is an integer greater than 1, a higher-order DAM
results with polynomial capacity m proportional to n^q for large n (Psaltis and Park, 1986). Exponential
capacity is also possible with proper choice of g. Such choices include g(u) = (n − u)^(−a), a > 1 (Dembo and
Zeitouni, 1988; Sayeh and Han, 1987) and g(u) = a^u, a > 1 (Chiueh and Goodman, 1988).

The choice g(u) = a^u results in an "exponential" DAM with capacity m = c^n and with error correction
capability. Here, c is a function of a and of the radius of attraction of fundamental memories, as depicted in
Figure 7.4.9. According to this figure, the exponential DAM is capable of achieving the ultimate capacity of
a binary state DAM, namely m = 2^n, in the limit of large a. As one might determine intuitively, though, this
DAM has no error correction abilities at such loading levels. For relatively small a, one may deduce an
approximate linear relation between c and the radius of attraction ρ. For example, Figure 7.4.9 gives
c ≈ 1.2 − 0.4ρ for one particular value of a. Hence, an exponential DAM with this nonlinearity can store up to
(1.2 − 0.4ρ)^n random memories if it is desired that all such memories have basins of attraction of radius ρn,
0 < ρ < 0.5 (here, one-pass retrieval is assumed).

Figure 7.4.9. Relation between the base constant c and the radius ρ of the fundamental memories' basins of
attraction, for various values of the nonlinearity parameter a, for an exponential DAM. This DAM has a
storage capacity of c^n. (Adapted from T. D. Chiueh and R. M. Goodman, 1991, Recurrent Correlation Associative
Memories, IEEE Transactions on Neural Networks, 2(2), pp. 275-284, ©1991 IEEE.)

Chiueh and Goodman (1991) showed that a sufficient condition for the dynamics in Equation (7.4.23) to be
stable in both serial and parallel update modes is that the activation function g be continuous and
monotone non-decreasing over [−n, +n]. This condition can be easily shown to hold for all choices of g
indicated above.
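
A sketch of the exponential DAM update with g(u) = a^u is given below (illustrative only, not part of the
original text; the overlap shift is a numerical convenience and does not change the retrieved sign pattern).

import numpy as np

def exponential_dam_step(x, X, a=2.0):
    """One parallel update x(k+1) = sgn( X @ a**(X^T x(k)) ): each stored memory
    (a column of X) is weighted by a raised to its overlap with the current state.
    Subtracting the maximum overlap rescales all weights by the same positive
    factor, avoiding overflow without affecting the sign of the result."""
    overlaps = X.T @ x
    weights = np.power(float(a), overlaps - overlaps.max())
    x_new = np.sign(X @ weights)
    x_new[x_new == 0] = 1.0
    return x_new

rng = np.random.default_rng(2)
n, m = 32, 200                                     # far more memories than units
X = rng.choice([-1.0, 1.0], size=(n, m))
key = X[:, 5].copy()
key[rng.choice(n, size=4, replace=False)] *= -1    # 4-bit noisy key
print(np.array_equal(exponential_dam_step(key, X), X[:, 5]))   # typically True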

7.4.5 Sequence Generator DAM

Autoassociative DAMs can be synthesized to act as sequence generators. In fact, no architectural changes
are necessary for a basic DAM to behave as a sequence generator. Furthermore, a simple correlation
recording recipe may still be utilized for storing sequences. Here, a simple sequence generator DAM (also
called temporal associative memory) is described whose dynamics are given by Equation (7.1.37) operating
in a parallel mode with Ii = 0, i = 1, 2, ..., n. Consider a sequence Si of mi distinct bipolar patterns,
j = 1, 2, ..., mi. The length of this sequence is mi. This sequence is a cycle if its last pattern is followed by its
first, with mi > 2. Here, the subscript i refers to the ith sequence. An autoassociative DAM can store the
sequence Si when the DAM's interconnection matrix W is defined by

(7.4.25)

Note that the first term on the right hand side of Equation (7.4.25) represents the normalized correlation
recording recipe of the heteroassociations between consecutive patterns, j = 1, 2, ..., mi − 1, whereas the
second term is an autocorrelation that attempts to terminate the recollection process at the last pattern of
sequence Si. Similarly, a cycle (i.e., Si whose last pattern is mapped back to its first, with mi > 2) can be
stored using Equation (7.4.25) with the autocorrelation term removed.

This DAM is also capable of storing autoassociations by treating them as sequences of length 1 and using
Equation (7.4.25) (here, the first term in Equation (7.4.25) vanishes). Finally, Equation (7.4.25) can be
extended for storing s distinct sequences Si by summing it over i, i = 1, 2, ..., s. Hence, this sequence
generator DAM is capable of simultaneous storage of sequences with different lengths and cycles with
different periods. However, associative retrieval may suffer if the loading of the DAM exceeds its capacity.
Also, the asymmetric nature of W in Equation (7.4.25) will generally lead to spurious cycles (oscillations)
of period two or higher.
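
A sketch of the sequence recording recipe follows (illustrative; the helper names are not from the text). It
builds W from consecutive-pattern correlations, with either a terminating autocorrelation term or a
cycle-closing term.

import numpy as np

def record_sequence(patterns, cycle=False):
    """Record bipolar patterns x^1 -> x^2 -> ... -> x^q:
    W = (1/n) * sum_j outer(x^{j+1}, x^j), plus either an autocorrelation term
    outer(x^q, x^q)/n that holds the last pattern, or (for a cycle) a term
    outer(x^1, x^q)/n that maps the last pattern back to the first."""
    patterns = [np.asarray(p, dtype=float) for p in patterns]
    n = patterns[0].size
    W = np.zeros((n, n))
    for prev, nxt in zip(patterns[:-1], patterns[1:]):
        W += np.outer(nxt, prev) / n
    W += np.outer(patterns[0] if cycle else patterns[-1], patterns[-1]) / n
    return W

def sequence_step(W, x):
    """One parallel sgn update; iterating it replays the stored sequence."""
    y = np.sign(W @ x)
    y[y == 0] = 1.0
    return y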

Generally speaking, the capacity of the sequence generator DAM is similar to an autoassociative DAM if
unbiased independent random vectors/patterns are assumed in all stored sequences. This implies that the
effective number of stored vectors must be very small compared to n, in the limit of large n, for proper
associative retrieval.

7.4.6 Heteroassociative DAM


A Heteroassociative DAM (HDAM) is shown in the block diagram of Figure 7.4.10 (Okajima et al., 1987).
It consists of two processing paths which form a closed loop. The first processing path computes a vector y
from an input x ∈ {−1, +1}n according to the parallel update rule

(7.4.26)

or its serial (asynchronous) version where one and only one unit updates its state at a given time. Here, F is
usually the sgn activation operator. Similarly, the second processing path computes a vector x according to

(7.4.27)

or its serial version. The vector y in Equation (7.4.27) is the same vector generated by Equation (7.4.26).
Also, because of the feedback employed, x in Equation (7.4.26) is given by Equation (7.4.27).

Figure 7.4.10. Block diagram of a Heteroassociative DAM.

The HDAM can be operated in either parallel or serial retrieval modes. In the parallel mode, the HDAM
starts from an initial state x(0) [y(0)], computes its state y (x) according to Equation (7.4.26) [Equation
(7.4.27)], and then updates state x (y) according to Equation (7.4.27) [Equation (7.4.26)]. This process is
iterated until convergence; i.e., until state x (or equivalently y) ceases to change. On the other hand, in the
serial update mode, only one randomly chosen component of the state x or y is updated at a given time.

Various methods have been proposed for storing a set of heteroassociations {xk, yk}, k = 1, 2, ..., m in the
HDAM. In most of these methods, the interconnection matrices W1 and W2 are computed independently by
requiring that all one-pass associations xk → yk and yk → xk, respectively, are perfectly stored. Here, it is
assumed that the set of associations to be stored forms a one-to-one mapping; otherwise, perfect storage
becomes impossible. Examples of such HDAM recording methods include the use of projection recording
(Hassoun, 1989 a, b) and Householder transformation-based recording (Leung and Cheung, 1991). These
methods require the linear independence of the vectors xk (also yk) for which a capacity of m = min(n, L) is
achievable. One drawback of these techniques, though, is that they do not guarantee the stability of the
HDAM; i.e., convergence to spurious cycles is possible. Empirical results show (Hassoun, 1989b) that
parallel updated projection-recorded HDAMs exhibit significant oscillatory behavior only at memory
loading levels close to the HDAM capacity.

Kosko (1987, 1988), independently, proposed a heteroassociative memory with the architecture of the
HDAM, but with the restriction W2 = W1T. This memory is known as a bidirectional associative memory (BAM). The
interesting feature of a BAM is that it is stable for any choice of the real-valued interconnection matrix W
and for both serial and parallel retrieval modes. This can be shown by starting from the BAM's bounded
Liapunov (energy) function

(7.4.28)

and showing that each serial or parallel state update decreases E (see Problem 7.4.14). One can also prove
BAM stability by noting that a BAM can be converted to a discrete autoassociative DAM (discrete Hopfield
DAM) with state vector x' = [xT yT]T and interconnection matrix W' given by

(7.4.29)

Now, since W' is a symmetric zero-diagonal matrix, the autoassociative DAM is stable if serial update is
assumed as was discussed in Section 7.1.2 (also, see Problem 7.1.18). Therefore, the serially updated BAM
is stable. One may also use this equivalence property to show the stability of the parallel updated BAM
(note that a parallel updated BAM is not equivalent to the (nonstable) parallel updated discrete Hopfield
DAM. This is because either states x or y, but not both, are updated in parallel at each step.)

From above, it can be concluded that the BAM always converges to a local minimum of its energy function
defined in Equation (7.4.28). It can be shown (Wang et al., 1991) that these local minima include all those
that correspond to associations {xk, yk} which are successfully loaded into the BAM (i.e., associations
which are equilibria of the BAM dynamics.)

The simplest storage recipe for storing the associations as BAM equilibrium points is the correlation
recording recipe of Equation (7.1.2). This recipe guarantees the BAM requirement that the forward path and
backward path interconnection matrices W1 and W2 are the transpose of each other, since

(7.4.30)

and

(7.4.31)

However, some serious drawbacks of using the correlation recording recipe are low capacity and poor
associative retrievals; when m random associations are stored in a correlation-recorded BAM, the condition
m << min(n, L) must be satisfied if good associative performance is desired (Hassoun, 1989b; Simpson,
1990). Heuristics for improving the performance of correlation-recorded BAMs can be found in Wang et al.
(1990).
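
A sketch of correlation-recorded BAM storage and recall is given below (illustrative; the function names are
not from the text). Because the backward path uses the transpose of the forward matrix, the BAM constraint
holds by construction.

import numpy as np

def bam_record(X, Y):
    """Correlation recording: columns of X (n x m) and Y (L x m) are the associated
    pairs; W maps the x-layer to the y-layer and W^T maps back."""
    return Y @ X.T

def bam_recall(W, key, max_passes=50):
    """Alternate forward and backward parallel updates until the x-state stabilizes."""
    x = np.sign(np.asarray(key, dtype=float))
    y = None
    for _ in range(max_passes):
        y = np.sign(W @ x)
        y[y == 0] = 1.0
        x_new = np.sign(W.T @ y)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y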

Before leaving this section, it should be noted that the above models of associative memories are by no
means exclusive. A number of other interesting models have been reported in the literature (interested
readers may find the volume edited by Hassoun (1993) useful in this regard.) Some of these models are
particularly interesting because of connections to biological memories (e.g., Kanerva, 1988 and 1993;
Alkon et al., 1993).

7.4 Other DAM Models

As compared to the above models, a number of more sophisticated DAMs have been proposed in the
literature. Some of these DAMs are improved variations of the ones discussed above. Others, though, are
substantially different models with interesting behavior. The following is a sample of such DAMs [for a
larger sample of DAM models and a thorough analysis, the reader is referred to Hassoun (1993)].

7.4.1 Brain-State-in-a-Box (BSB) DAM

The "brain-state-in-a-box" (BSB) model ( Anderson et al., 1977is one of the earliest DAM models. It is a
discrete-time continuous-state parallel updated DAM whose dynamics are given by

(7.4.1)

where the input key is presented as the initial state x(0) of the DAM. Here, x(k), with 0 1, is a decay term
of the state x(k) and is a positive constant which represents feedback gain. The vector = [I1 I2 ... In]T
represents a scaled external input (bias) to the system, which persists for all time k. Some particular choices
for are = 0 (i.e., no external bias) or = . The operation F() is a piece-wise linear operator which maps the
ith component of its argument vector according to:

(7.4.2)

The BSB model gets its name from the fact that the state of the system is continuous and constrained to be
in the hypercube [−1, +1]n.

When operated as a DAM, the BSB model typically employs an interconnection matrix W given by the
correlation recording recipe to store a set of m n-dimensional bipolar binary vectors as attractors (located at
corners of the hypercube [−1, +1]n). Here, one normally sets = 0 and assumes the input to the DAM (i.e.,
x(0)) to be a noisy vector which may be anywhere in the hypercube [−1, +1]n. The performance of this
DAM with random stored vectors, large n, and m << n has been studied through numerical simulations by
Anderson (1993). These simulations particularly address the effects of model parameters and on memory
retrieval.

The stability of the BSB model in Equation (7.4.1) with symmetric W, = 0, and = 1 has been analyzed by
several researchers including Golden (1986), Greenberg (1988), Hui and ak (1992), and Anderson (1993).
In this case, this model reduces to

(7.4.3)

Golden (1986, 1993) analyzed the dynamics of the system in Equation (7.4.3) and found that it behaves as a
gradient descent system that minimizes the energy

(7.4.4)

He also proved that the dynamics in Equation (7.4.3) always converges to a local minimum of E(x) if W is
symmetric and min 0 (i.e., W is positive semidefinite) or , where min is the smallest eigenvalue of W. With
these conditions, the stable equilibria of this model are restricted to the surface and/or vertices of the
hypercube. It is interesting to note here, that when this BSB DAM employs correlation recording (with
preserved diagonal of W), it always converges to a minimum of E(x) because of the positive semidefinite
symmetric nature of the autocorrelation matrix. The following example illustrates the dynamics for a two-
state zero-diagonal correlation-recorded BSB DAM.

Example 7.4.1: Consider the problem of designing a simple BSB DAM which is capable fo storing the
memory vector x = [+1 −1]T. One possible way of recording this DAM with x is to employ the normalized
correlation recording recipe of Equation (7.1.6). This recording results in the symmetric weight matrix

after forcing the diagonal elements to zero. This matrix has the two eigenvalues min = −0.5 and max = 0.5.
The energy function for this DAM is given by Equation (7.4.4) and is plotted in Figure 7.4.1. The figure
shows two minima of equal energy at the state [+1 −1]T and its complement state [−1 +1]T, and two
maxima of equal energy at [+1 +1]T and [−1 −1]T. Simulations using the BSB dynamics of Equation (7.4.3)
are shown in Figure 7.4.2 for a number of initial states x(0). Using = 1 and = 0.3 resulted in convergence
to one of the two minima of E(x), as depicted in Figure 7.4.3a and 7.4.3b, respectively. The basins of
attraction of these stable states are equal in size and are separated by the line x2 = x1. Note that the values of
used here satisfy the condition . The effects of violating this condition on the stability of the DAM are
shown in Figure 7.4.3, where was set equal to five. The figure depicts a limit cycle or an oscillation
between the two states of maximum energy, [−1 −1]T and [+1 +1]T. This limit cycle was generated by
starting from x(0) = [0.9 0.7]T. Starting from x(0) = [0.9 0.6]T leads to convergence to the desired state
[+1 −1]T as depicted by the lower state trajectory in Figure 7.4.3. It is interesting to note how this state was
reached by bouncing back and forth off the boundaries of the state space, .

Figure 7.4.1. A plot of the energy function E(x) for the BSB DAM of Example 7.4.1. There are two minima
with energy E = −0.5 at states [+1 −1]T and [−1 +1]T, and two maxima with energy E = 0.5 at [+1 +1]T and
[−1 −1]T.
(a)

(b)

Figure 7.4.2. State space trajectories of a two-state BSB DAM, which employs a zero-diagonal
autocorrelation weight matrix to store the memory vector x = [+1 −1]T. The resulting weight matrix is symmetric with λmin = −0.5 and λmax = 0.5. (a) = 1, and (b) = 0.3. Circles indicate state transitions. The lines are used as guides to the
eye.

Figure 7.4.3. State space trajectories of the BSB DAM of Figure 7.4.2, but with = 5. The limit cycle (top
trajectory) was obtained by starting from x(0) = [0.9 0.7]T. The converging dynamics (bottom trajectory)
was obtained by starting from x(0) = [0.9 0.6]T.

Greenberg (1988) showed the following interesting BSB DAM property. He showed that all vertices of a
BSB DAM are attractors (asymptotically stable equilibria) if

, i = 1, 2, ..., n (7.4.5)

Equation (7.4.5) defines what is referred to as a "strongly" row diagonally dominant matrix W. As an
example, it is noted that the BSB DAM with W = I has its vertices as attractors. For associative memories,
though, it is not desired to have all vertices (2n of them) of the hypercube as attractors. Therefore, a row
diagonally dominant weight matrix is to be avoided (recall that the interconnection matrix in a DAM is
usually treated by forcing its diagonal to zero).

A more general result concerning the stability of the vertices of the BSB model in Equation (7.4.1), with
= 1 and = , was reported by Hui and Żak (1992, 1993). They showed that if for i = 1, 2, ..., n, then all
vertices of the bipolar hypercube are asymptotically stable equilibrium points. Here, W need not be
symmetric and Ii is an arbitrary constant bias input to the ith unit. Hui and Żak also showed that, if W is
symmetric, a hypercube vertex x* which satisfies the condition

(7.4.6)
is a stable equilibrium. Here, [·]i signifies the ith component of the argument vector. Equation (7.4.6) is
particularly useful in characterizing the capacity of a zero-diagonal correlation-recorded BSB DAM where
m unbiased and independent random vectors xk ∈ {−1, +1}n are stored. Let xh be one of these vectors and
substitute it in Equation (7.4.6). Assuming the DAM is receiving no bias ( = 0), the inequality in Equation
(7.4.6) becomes

(7.4.7)

or

(7.4.8)

since and > 0. Thus, the vertex xh is an attractor if

(7.4.9)

or, equivalently

(7.4.10)

where the term inside the parentheses is the cross-talk term. In Section 7.2.1, it was determined that the
probability of the n inequalities in Equation (7.4.10) to be more than 99 percent correct for all m memories
approaches 1, in the limit of large n, if . Hence, it is concluded that the absolute capacity of the BSB DAM
for storing random bipolar binary vectors is identical to that of the discrete Hopfield DAM when correlation
recording is used with zero self-coupling (i.e., wii = 0 for all i). In fact, the present capacity result is stronger
than the absolute capacity result of Section 7.2.1; when is smaller than the condition of Equation (7.4.6) is
satisfied and, therefore, all xk vectors are stable equilibria.

7.4.2 Non-Monotonic Activations DAM

As indicated in Section 7.3, one way of improving DAM performance for a given recording recipe is by
appropriately designing the DAM components. Here, the idea is to design the DAM retrieval process so that
the DAM dynamics exploit certain known features of the synthesized interconnection matrix W. This
section presents correlation-recorded DAMs whose performance is significantly enhanced as a result of
modifying the activation functions of their units from the typical sgn- or sigmoid-type activation to more
sophisticated non-monotonic activations. Two DAMs are considered: A discrete-time discrete-state
parallel-updated DAM, and a continuous-time continuous-state DAM.

Discrete Model

First, consider the zero-diagonal correlation-recorded discrete Hopfield DAM discussed earlier in this
chapter. The retrieval dynamics of this DAM show some strange dynamical behavior. When initialized with
a vector x(0) that has an overlap p with one of the stored random memories, say x1, the DAM state x(k)
initially evolves towards x1 but does not always converge or stay close to x1 as shown in Figure 7.4.4. It has
been shown (Amari and Maginu, 1988) that the overlap p(k), when started with p(0) less than some critical
value pc, initially increases but soon starts to decrease and ultimately stabilizes at a value less than 1. In this
case, the DAM converges to a spurious memory as depicted schematically by the trajectory on the right in
Figure 7.4.4. The value of pc increases (from zero) monotonically with the pattern ratio and increases
sharply from about 0.5 to 1 as the pattern ratio becomes larger than 0.15, the DAM's relative capacity (note that pc can also be written in terms of the radius of attraction of a fundamental memory, as in Section 7.2.1). This peculiar phenomenon can be explained by first noting the effects of the overlaps p(k) and qh(k) on the ith unit weighted-
sum ui(k) given by

(7.4.11)

or, when written in terms of p(k) and qh(k),


(7.4.12)

Note the effects of the overlap terms qh(k) on the value of ui(k). The higher the overlaps with memories
other than x1, the larger the value of the cross-talk term [the summation term in Equation (7.4.12)] which, in
turn, drives |ui(k)| to large values. Morita (1993) showed, using simulations, that both the sum of squares of
the overlaps with all stored memories except x1, defined as

(7.4.13)

and p2(k) initially increase with k. Then, one of two scenarios might occur. In the first scenario, s(k) begins
to decrease and p2(k) continues to increase until it reaches 1; i.e., x(k) stabilizes at x1. In the second scenario, s(k) continues to increase and may attain values larger than 1 while p2(k) decreases.
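
The two scenarios can be observed numerically. Below is a hedged simulation sketch (not from the book) that tracks p(k) and s(k) during parallel retrieval in a zero-diagonal correlation-recorded discrete Hopfield DAM; the network size, loading, and initial error rate are illustrative choices.

```python
import numpy as np

# Track the overlap p(k) with the target memory x1 and the summed squared overlaps
# s(k) with the other memories during parallel retrieval.

rng = np.random.default_rng(1)
n, m = 400, 60                                     # pattern ratio m/n = 0.15
X = rng.choice([-1.0, 1.0], size=(m, n))           # rows are the stored memories
W = X.T @ X / n
np.fill_diagonal(W, 0.0)                           # zero-diagonal correlation recording

x1 = X[0]
x = x1.copy()
flip = rng.choice(n, size=n // 4, replace=False)
x[flip] *= -1                                      # initial overlap p(0) = 0.5

for k in range(10):
    p = float(x @ x1) / n                          # overlap with the target memory
    s = float(np.sum((X[1:] @ x / n) ** 2))        # sum of squared other overlaps
    print(k, round(p, 3), round(s, 3))
    x = np.where(W @ x >= 0, 1.0, -1.0)            # parallel sgn update
```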

Figure 7.4.4. Schematic representation of converging trajectories in a correlation-recorded discrete Hopfield


DAM. When the distance (overlap) between x(0) and x1 is larger (smaller) than some critical value, the
DAM converges to a spurious memory (right hand side trajectory). Otherwise, the DAM retrieves the
fundamental memory x1. (From Neural Networks, 6, M. Morita, Associative Memory With Nonmonotone
Dynamics, pp. 115-126, Copyright 1993, with kind permission from Pergamon Press Ltd., Headington Hill
Hall, Oxford 0X3 0BW, UK.)

The above phenomenon suggests a method for improving DAM performance by modifying the dynamics of
the Hopfield DAM such that the state is forced to move in such a direction that s(k) is reduced, but not
p2(k). One such method is to reduce the influence of units with large |ui| values. Such neurons actually cause
the increase in s(k). The influence of a unit i with large |ui|, say |ui| > > 0, can be reduced by reversing the
sign of xi. This method can be implemented using the "partial reverse" dynamics (Morita, 1993) given by:

(7.4.14)

where > 0 and F and G are activation functions which operate componentwise on their vector arguments.
Here, F is the sgn activation function and G is defined by

(7.4.15)

where u is a component of the vector u = Wx(k). The values of parameters and must be determined with
care. Empirically, = 2.7 and may be chosen. These parameters are chosen so that the number of units which
satisfy |ui| > is small when x(k) is close to any of the stored memories provided is not too large. It should be
noted that Equation (7.4.14) does not always converge to a stable equilibrium. Numerical simulations show
that a DAM employing this partial reverse method has several advantages over the same DAM, but with
pure sgn activations. These advantages include a smaller critical overlap pc (i.e., wider basins of attraction
for fundamental memories), faster convergence, lower rate of convergence to spurious memories, and error
correction capability for pattern ratios up to .

Continuous Model

Consider the continuous-time DAM in Equation (7.1.21) with C = = I, and = 0. Namely,


(7.4.16)

where W is the usual zero-diagonal normalized autocorrelation matrix . Here, the partial reverse method
described above cannot be applied directly due to the continuous DAM dynamics. One can still capture the
essential elements of this method, though, by designing the activation function such that it reduces the
influence of unit i if |ui| is very large. This can be achieved by employing the non-monotone activation
function shown in Figure 7.4.5 with the following analytical form (Morita et al., 1990 a, b):

(7.4.17)

where , ', and are positive constants with typical values of 50, 15, 1 and 0.5, respectively. This non-
monotone activation function operates to keep the variance of |ui| from growing too large, and hence
implements a similar effect to the one implemented by the partial reverse method. Empirical results show
that this DAM has an absolute capacity that is proportional to n with substantial error correction
capabilities. Also, this DAM almost never converges to spurious memories when retrieval of a fundamental
memory is not successful; instead, the DAM state continues to wander (chaotically) without reaching any
equilibrium (Morita, 1993).

Figure 7.4.5. Non-monotonic activation function generated from Equation (7.4.17) with , , and .

Figure 7.4.6 gives simulation-based capacity curves (Yoshizawa et al., 1993a) depicting plots of the critical
overlap pc (i.e., one minus twice the radius of attraction of a fundamental memory) versus pattern ratio for
three DAMs with the dynamics in Equation (7.4.16). Two of the DAMs (represented by curves A and B)
employ a zero-diagonal autocorrelation interconnection matrix and a sigmoidal activation function. The
third DAM (curve C) employs projection recording with preserved W diagonal and the activation function
in Equation (7.4.17). As expected, the DAM with sigmoid activation (curve A) loses its associative retrieval
capabilities and the ability to retain the stored memories as fixed points (designated by the dashed portion
of curve A) as approaches 0.15. On the other hand, and with the same interconnection matrix, the non-
monotone activation DAM exhibits good associative retrieval for a wide range of pattern ratios even when
exceeds 0.5 (for example, Figure 7.4.6 (curve B) predicts a basin of attraction radius 0.22 at = 0.5. This
means that proper retrieval is possible from initial states having 22 percent or less random errors with any
one of the stored memories). It is interesting to note that this performance exceeds that of a projection-
recorded DAM with sigmoid activation function, represented by curve C. Note, though, that the
performance of the zero-diagonal projection-recorded discrete DAM with serial update of states (refer to
Section 7.2.2 and Figure 7.2.2) has (or ) which exceeds that of the non-monotone activations correlation-
recorded DAM. Still, the demonstrated retrieval capabilities of the non-monotone activations DAM are
impressive. The non-monotone dynamics can thus be viewed as extracting and using intrinsic information
from the autocorrelation matrix which the "sigmoid dynamics" is not capable of utilizing. For a theoretical
treatment of the capacity and stability of the non-monotone activations DAM, the reader is referred to
Yoshizawa et al. (1993a, b). Nishimori and Opris (1993) reported a discrete-time discrete-state version of
this model where the nonmonotonic activation function in Figure 7.4.5 is used with , = 1.0, and arbitrary
> 0. They showed that a maximum capacity of is possible with , and gave a complete characterization of
this model's capacity versus the parameter .
Figure 7.4.6. Simulation generated capacity/error correction curves for the continuous-time DAM of
Equation (7.4.16). Curves A and B represent the cases of zero-diagonal correlation-recorded DAM with
sigmoidal activation function and non-monotone activation function, respectively. Curve C is for a
projection recorded (preserved diagonal) DAM with sigmoidal activation function which is given for
comparison purposes. (From Neural Networks, 6, S. Yoshizawa et al., Capacity of Associative Memory
Using a Nonmonotonic Neuron Model, pp. 167-176, with kind permission from Pergamon Press Ltd.,
Headington Hill Hall, Oxford 0X3 0BW, UK.)

7.4.3 Hysteretic Activations DAM

Associative recall of DAMs can be improved by introducing hysteresis to the units' activation function. This
phenomenon is described next in the context of a discrete Hopfield DAM. Here, the interconnection matrix
W is the normalized zero-diagonal autocorrelation matrix with the ith DAM state updated according to:

(7.4.18)

where the activation function fi is given by:

(7.4.19)

The following discussion assumes a hysteretic parameter i = for all units i = 1, 2, ..., n. A plot of this
activation function is given in Figure 7.4.7 which shows a hysteretic property controlled by the parameter
> 0. Qualitatively speaking, the hysteresis term xi(k) in Equation (7.4.19) favors a unit to stay in its current
state xi(k); the larger the value of , the higher the tendency of unit i to retain its current state.

Figure 7.4.7. Transfer characteristics of a unit with hysteretic activation function.

In the previous section, it was noted that when the state of a DAM is not far from a fundamental memory,
degradation of the associative retrieval process is caused by the units moving from the right states to the
wrong ones. The motivation behind the hysteretic property is that it causes units with proper response to
preserve their current state longer, thus increasing the chance for the DAM to correct its "wrong" states and
ultimately converge to the closest fundamental memory. But, simultaneously, there are units moving from
the wrong states to the right ones, and hysteresis tends to prevent these transitions as well. It has been
shown (Yanai and Sawada, 1990) that, for the proper choice of , the former process is more effective than
the latter and associative retrieval can be enhanced. For small , Yanai and Sawada (1990) showed that the
absolute capacity of a zero-diagonal correlation-recorded DAM with hysteretic units is given by the limit

(7.4.20)

This result can be derived by first noting that the ith bit transition for the above DAM may be described by
Equation (7.2.2) with xi(0) replaced by (1 + )xi(0), and following a derivation similar to that in Equations
(7.2.3) through (7.2.6). By comparing Equations (7.4.20) and (7.2.6) we find that hysteresis leads to a
substantial increase in the number of memorizable vectors as compared to using no hysteresis at all. Yanai
and Sawada also showed that the relative capacity increases with (e.g., the relative capacity increases from
0.15 at = 0 to about 0.25 at ). This implies that the basins of attraction of fundamental memories increase
when hysteresis is employed. Empirical results suggest that a value of slightly higher than (with << 1)
leads to the largest basin of attraction size around fundamental memories.

Hysteresis can arise from allowing a non-zero diagonal element wii, i = 1, 2, ..., n. To see this, consider a
discrete DAM with a normalized autocorrelation matrix. Note that this matrix has diagonal elements wii = m/n,
i = 1, 2, ..., n. Now, the update rule for the ith unit is

(7.4.21)

Comparing Equation (7.4.21) to Equation (7.4.19) of a hysteretic activation DAM reveals that the two
DAMs are mathematically equivalent if the hysteretic parameter is set equal to m/n (it is interesting to note that m/n is, approximately, the empirically optimal value for the hysteretic parameter). Therefore, it is concluded that preserving the
original diagonal in a correlation-recorded DAM is advantageous in terms of the quality of associative
retrieval, when m/n << 1 (see Problem 7.4.8 for further explanation).
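
As a concrete illustration, here is a minimal sketch (not from the book) of a hysteretic parallel update for a zero-diagonal correlation-recorded discrete DAM. The name `delta` for the hysteretic parameter is illustrative; per the equivalence noted above, setting it to the preserved diagonal value m/n gives the same update as keeping the original diagonal of the normalized autocorrelation matrix.

```python
import numpy as np

def hysteretic_update(W, x, delta):
    """One parallel update of all sgn units with hysteresis parameter delta.

    W     : zero-diagonal correlation-recorded weight matrix
    x     : current bipolar state vector x(k)
    delta : small positive hysteretic parameter (e.g., of the order of m/n)
    """
    net = W @ x + delta * x              # hysteresis term biases unit i toward x_i(k)
    return np.where(net >= 0, 1.0, -1.0)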

The advantages of small positive self-connections have also been demonstrated for projection-recorded
discrete DAMs. Krauth et al. (1988) have demonstrated that using a small positive diagonal element with
the projection-recorded discrete DAM increases the radius of attraction of fundamental memories if the
DAM is substantially loaded (i.e., the pattern ratio is approximately greater than 0.4). For example, they found numerically that for a pattern ratio of 0.5, using a diagonal term of about 0.075 instead of zero increases the basins of attraction of
fundamental memories by about 50%. For the projection DAM, though, retaining the original diagonal
leads to relatively large values for the self-connections if the pattern ratio is large (for m unbiased random
memories, we have wii ≈ m/n, i = 1, 2, ..., n). This greatly reduces the basin of attraction size for fundamental memories as shown empirically in Section 7.2.2. All this suggests that the best approach is to give all self-connections a small positive value much smaller than 1.

7.4.4 Exponential Capacity DAM

Up to this point, the best DAM considered has a capacity proportional to n, the number of units in the
DAM. However, very high capacity DAMs (e.g., m exponential in n) can be realized if one is willing to
consider more complex memory architectures than the ones considered so far. An exponential capacity
DAM was proposed by Chiueh and Goodman (1988, 1991). This DAM can store up to c^n (c > 1) random vectors xh ∈ {−1, +1}n, h = 1, 2, ..., m, with substantial error correction abilities. The exponential DAM is
described next.

Consider the architecture in Figure 7.4.8. This architecture describes a two-layer dynamic autoassociative
memory. The dynamics are nonlinear due to the nonlinear operations G and F. The output nonlinearity F
implements sgn activations which gives the DAM a discrete-state nature. The matrix X is the matrix whose
columns are the desired memory vectors xh, h = 1, 2, ..., m. This DAM may update its state x(k) in either
serial or parallel mode. The parallel updated dynamics are given in vector form as:

(7.4.22)

and in component form as:

(7.4.23)

where g is the scalar version of the operator G. Here, g is normally assumed to be a continuous monotone
non-decreasing function over [−n, +n]. The recording of a new memory vector xh is simply done by
augmenting the matrices XT and X with (xh)T and xh, respectively (this corresponds to allocating two new n-input units, one in each layer, in the network of Figure 7.4.8). The choice of the first layer activation
functions, g, plays a critical role in determining the capacity and dynamics of this DAM. To see this,
assume first a simple linear activation function g() = . This assumption reduces the dynamics in Equation
(7.4.22) to

(7.4.24)
which is simply the dynamics of the correlation-recorded discrete DAM.

Figure 7.4.8. Architecture of a very high capacity discrete DAM.

On the other hand, if one chooses g(u) = (n + u)^q, where q is an integer greater than 1, a higher-order DAM results with polynomial capacity m ∝ n^q for large n (Psaltis and Park, 1986). Exponential capacity is also possible with proper choice of g. Such choices include g(u) = (n − u)^−a, a > 1 (Dembo and Zeitouni, 1988; Sayeh and Han, 1987) and g(u) = a^u, a > 1 (Chiueh and Goodman, 1988).

The choice g(u) = a^u results in an "exponential" DAM with capacity m = c^n and with error correction
capability. Here, c is a function of a and the radius of attraction of fundamental memories as depicted in
Figure 7.4.9. According to this figure, the exponential DAM is capable of achieving the ultimate capacity of
a binary state DAM, namely m = 2^n, in the limit of large a. As one might determine intuitively, though, this
DAM has no error correction abilities at such loading levels. For relatively small a, one may come up with
an approximate linear relation between c and . For example, Figure 7.4.9 gives c 1.2 − 0.4 for a = . Hence,
an exponential DAM with nonlinearity can store up to (1.2 − 0.4)n random memories if it is desired that all
such memories have basins of attraction of size , 0 < (here, one pass retrieval is assumed).

Figure 7.4.9. Relation between the base constant c and fundamental memories basins of attraction radius ,
for various values of the nonlinearity parameter a, for an exponential DAM. This DAM has a storage
capacity of c^n. (Adapted from T. D. Chiueh and R. M. Goodman, 1991, Recurrent Correlation Associative
Memories, IEEE Transactions on Neural Networks, 2(2), pp. 275-284, ©1991 IEEE.)

Chiueh and Goodman (1991) showed that a sufficient condition for the dynamics in Equation (7.4.23) to be
stable in both serial and parallel update modes is that the activation function g() be continuous and
monotone non-decreasing over [−n, +n]. This condition can be easily shown to be true for all choices of g()
indicated above.
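
The retrieval step of Equations (7.4.22)-(7.4.23) with the exponential choice g(u) = a^u can be sketched as follows (illustrative parameter values; the helper name is not from the book).

```python
import numpy as np

def exponential_dam_step(X, x, a=2.0):
    """One parallel update x(k+1) = F(X g(X^T x(k))) with g(u) = a**u and F = sgn.

    X : (n, m) matrix whose columns are the stored bipolar memories
    x : current state vector in {-1, +1}^n
    """
    u = X.T @ x                          # overlaps <x_h, x(k)>, one per stored memory
    return np.where(X @ np.power(a, u) >= 0, 1.0, -1.0)

# Usage: store random bipolar memories and retrieve from a corrupted probe.
rng = np.random.default_rng(0)
n, m = 64, 10
X = rng.choice([-1.0, 1.0], size=(n, m))
probe = X[:, 0].copy()
probe[rng.choice(n, size=10, replace=False)] *= -1   # flip 10 of the 64 bits
x = probe
for _ in range(3):
    x = exponential_dam_step(X, x)
print(bool(np.array_equal(x, X[:, 0])))              # typically True
```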

7.4.5 Sequence Generator DAM

Autoassociative DAMs can be synthesized to act as sequence generators. In fact, no architectural changes
are necessary for a basic DAM to behave as a sequence generator. Furthermore, a simple correlation
recording recipe may still be utilized for storing sequences. Here, a simple sequence generator DAM (also
called temporal associative memory) is described whose dynamics are given by Equation (7.1.37) operating
in a parallel mode with Ii = 0, i = 1, 2, ..., n. Consider a sequence of mi distinct patterns Si: with ,
j = 1, 2, ..., mi. The length of this sequence is mi. This sequence is a cycle if with mi > 2. Here, the subscript
i on xi and mi refers to the ith sequence. An autoassociative DAM can store the sequence Si when the
DAM's interconnection matrix W is defined by

(7.4.25)

Note that the first term on the right hand side of Equation (7.4.25) represents the normalized correlation
recording recipe of the heteroassociations , j = 1, 2, ..., mi − 1, whereas the second term is an autocorrelation
that attempts to terminate the recollection process at the last pattern of sequence Si, namely, . Similarly, a
cycle (i.e., Si with and mi > 2) can be stored using Equation (7.4.25) with the autocorrelation term removed.

This DAM is also capable of storing autoassociations by treating them as sequences of length 1 and using
Equation (7.4.25) (here, the first term in Equation (7.4.25) vanishes). Finally, Equation (7.4.25) can be
extended for storing s distinct sequences Si by summing it over i, i = 1, 2, ..., s. Hence, this sequence
generator DAM is capable of simultaneous storage of sequences with different lengths and cycles with
different periods. However, associative retrieval may suffer if the loading of the DAM exceeds its capacity.
Also, the asymmetric nature of W in Equation (7.4.25) will generally lead to spurious cycles (oscillations)
of period two or higher.
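
A hedged sketch of this recording recipe and of parallel sequence recall is given below; the 1/n scaling and the function names are assumptions consistent with the normalized correlation recipe described in the text.

```python
import numpy as np

def record_sequence(seq):
    """seq: list of bipolar vectors [x1, x2, ..., x_mi] forming one sequence."""
    seq = [np.asarray(v, dtype=float) for v in seq]
    n = seq[0].size
    W = np.zeros((n, n))
    for prev, nxt in zip(seq[:-1], seq[1:]):
        W += np.outer(nxt, prev) / n     # heteroassociation x^j -> x^(j+1)
    W += np.outer(seq[-1], seq[-1]) / n  # autocorrelation term: stop at the last pattern
    return W

def recall_sequence(W, x0, steps):
    """Parallel sgn updates starting from x0; returns the visited states."""
    x = np.where(np.asarray(x0, dtype=float) >= 0, 1.0, -1.0)
    states = [x]
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1.0, -1.0)
        states.append(x)
    return states
```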

Generally speaking, the capacity of the sequence generator DAM is similar to an autoassociative DAM if
unbiased independent random vectors/patterns are assumed in all stored sequences. This implies that the
effective number of stored vectors must be very small compared to n, in the limit of large n, for proper
associative retrieval.

7.4.6 Heteroassociative DAM

A Heteroassociative DAM (HDAM) is shown in the block diagram of Figure 7.4.10 (Okajima et al., 1987).
It consists of two processing paths which form a closed loop. The first processing path computes a vector
y from an input x ∈ {−1, +1}n according to the parallel update rule

(7.4.26)

or its serial (asynchronous) version where one and only one unit updates its state at a given time. Here, F is
usually the sgn activation operator. Similarly, the second processing path computes a vector x according to

(7.4.27)

or its serial version. The vector y in Equation (7.4.27) is the same vector generated by Equation (7.4.26).
Also, because of the feedback employed, x in Equation (7.4.26) is given by Equation (7.4.27).

Figure 7.4.10. Block diagram of a Heteroassociative DAM.

The HDAM can be operated in either parallel or serial retrieval modes. In the parallel mode, the HDAM
starts from an initial state x(0) [y(0)], computes its state y (x) according to Equation (7.4.26) [Equation
(7.4.27)], and then updates state x (y) according to Equation (7.4.27) [Equation (7.4.26)]. This process is
iterated until convergence; i.e., until state x (or equivalently y) ceases to change. On the other hand, in the
serial update mode, only one randomly chosen component of the state x or y is updated at a given time.

Various methods have been proposed for storing a set of heteroassociations {xk, yk}, k = 1, 2, ..., m in the
HDAM. In most of these methods, the interconnection matrices W1 and W2 are computed independently by
requiring that all one-pass associations xk → yk and yk → xk, respectively, are perfectly stored. Here, it is
assumed that the set of associations to be stored forms a one-to-one mapping; otherwise, perfect storage
becomes impossible. Examples of such HDAM recording methods include the use of projection recording
(Hassoun, 1989 a, b) and Householder transformation-based recording (Leung and Cheung, 1991). These
methods require the linear independence of the vectors xk (also yk) for which a capacity of m = min(n, L) is
achievable. One drawback of these techniques, though, is that they do not guarantee the stability of the
HDAM; i.e., convergence to spurious cycles is possible. Empirical results show (Hassoun, 1989b) that
parallel updated projection-recorded HDAMs exhibit significant oscillatory behavior only at memory
loading levels close to the HDAM capacity.

Kosko (1987, 1988), independently, proposed a heteroassociative memory with the architecture of the
HDAM, but with the restriction W2 = W1T. This memory is known as a bidirectional associative memory (BAM). The
interesting feature of a BAM is that it is stable for any choice of the real-valued interconnection matrix W
and for both serial and parallel retrieval modes. This can be shown by starting from the BAM's bounded
Liapunov (energy) function

(7.4.28)
and showing that each serial or parallel state update decreases E (see Problem 7.4.14). One can also prove
BAM stability by noting that a BAM can be converted to a discrete autoassociative DAM (discrete Hopfield
DAM) with state vector x' = [xT yT]T and interconnection matrix W' given by

(7.4.29)

Now, since W' is a symmetric zero-diagonal matrix, the autoassociative DAM is stable if serial update is
assumed as was discussed in Section 7.1.2 (also, see Problem 7.1.18). Therefore, the serially updated BAM
is stable. One may also use this equivalence property to show the stability of the parallel updated BAM
(note that a parallel updated BAM is not equivalent to the (nonstable) parallel updated discrete Hopfield DAM; this is because either state x or state y, but not both, is updated in parallel at each step.)

From above, it can be concluded that the BAM always converges to a local minimum of its energy function
defined in Equation (7.4.28). It can be shown (Wang et al., 1991) that these local minima include all those
that correspond to associations {xk, yk} which are successfully loaded into the BAM (i.e., associations
which are equilibria of the BAM dynamics.)

The most simple storage recipe for storing the associations as BAM equilibrium points is the correlation
recording recipe of Equation (7.1.2). This recipe guarantees the BAM requirement that the forward path and
backward path interconnection matrices W1 and W2 are the transpose of each other, since

(7.4.30)

and

(7.4.31)

However, some serious drawbacks of using the correlation recording recipe are low capacity and poor
associative retrievals; when m random associations are stored in a correlation-recorded BAM, the condition
m << min(n, L) must be satisfied if good associative performance is desired (Hassoun, 1989b; Simpson,
1990). Heuristics for improving the performance of correlation-recorded BAMs can be found in Wang et al.
(1990).
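
A minimal sketch of a correlation-recorded BAM (forward matrix W = Σk yk xkT, backward matrix WT, sgn units on both paths) and its parallel retrieval loop is shown below; function names and the convergence test are illustrative.

```python
import numpy as np

def bam_record(X, Y):
    """X: (m, n) rows are key patterns x_k; Y: (m, L) rows are paired patterns y_k."""
    return Y.T @ X                                   # forward matrix W (L x n)

def bam_recall(W, x0, max_iters=50):
    """Iterate forward/backward passes until the x state ceases to change."""
    x = np.where(np.asarray(x0, dtype=float) >= 0, 1.0, -1.0)
    y = None
    for _ in range(max_iters):
        y = np.where(W @ x >= 0, 1.0, -1.0)          # forward path:  y = sgn(W x)
        x_new = np.where(W.T @ y >= 0, 1.0, -1.0)    # backward path: x = sgn(W^T y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y
```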

Before leaving this section, it should be noted that the above models of associative memories are by no
means exclusive. A number of other interesting models have been reported in the literature (interested
readers may find the volume edited by Hassoun (1993) useful in this regard.) Some of these models are
particularly interesting because of connections to biological memories (e.g., Kanerva, 1988 and 1993;
Alkon et al., 1993).

7.6 Summary

This chapter introduces a variety of associative neural memories and characterizes their capacity and their
error correction capability. In particular, attention is given to recurrent associative nets with dynamic
recollection of stored information.

The most simple associative memory is the linear associative memory (LAM) with correlation-recording of
real-valued memory patterns. Perfect storage in the LAM requires associations whose key patterns (input
patterns) are orthonormal. Furthermore, one only needs to have linearly independent key patterns if the
projection recording technique is used. This results in an optimal linear associative memory (OLAM) which
has noise suppression capabilities. If the stored associations are binary patterns and if a clipping
nonlinearity is used at the output of the LAM, then the orthonormal requirement on the key patterns may be
relaxed to a pseudo-orthogonal requirement. In this case, the associative memory is nonlinear.
Methods for improving the performance of LAMs, such as multiple training and adding specialized
associations to the training set, are also discussed. The remainder of the chapter deals with DAMs (mainly
single-layer autoassociative DAM's) which have recurrent architectures.

The stability, capacity, and associative retrieval properties of DAMs are characterized. Among the DAM
models discussed are the continuous-time continuous-state model (the analog Hopfield net), the discrete-
time continuous-state model, and the discrete-time discrete-state model (Hopfield's discrete net). The
stability of these DAMs is shown by defining appropriate Liapunov (energy) functions. A serious
shortcoming with the correlation-recorded versions of these DAMs is their inefficient memory storage
capacity, especially when error correction is required. Another disadvantage of these DAMs is the presence
of too many spurious attractors (or false memories) whose number grows exponentially in the size (number
of units) of the DAM.

Improved capacity and error correction can be achieved in DAMs which employ projection recording.
Several projection DAMs are discussed which differ in their state update dynamics and/or the nature of
their state: continuous versus discrete. It is found that these DAMs are capable of storing a number of
memories which can approach the number of units in the DAM. These DAMs also have good error
correction capabilities. Here, the presence of self-coupling (diagonal-weights) is generally found to have a
negative effect on DAM performance; substantial improvements in capacity and error correction capability
are achieved when self-coupling is eliminated.

In addition to the above DAMs, the following models are discussed: Brain-state-in-a-box (BSB) model,
non-monotonic activations model, hysteretic activations model, exponential capacity model, sequence
generator model, and heteroassociative model. Some of these models still employ simple correlation
recording for memory storage, yet the retrieval dynamics employed results in substantial improvement in
DAM performance; this is indeed the case when non-monotonic or hysteretic activations are used. A
generalization of the basic correlation DAM into a model with higher nonlinearities allows for storage of an
exponential (in memory size) number of associations with "good" error correction. It is also shown how
temporal associations (sequences) and heteroassociations can be handled by simple variation of the
recording recipe and intuitive architectural extension, respectively.

The chapter concludes by showing how a single-layer continuous-time continuous-state DAM can be
viewed as a gradient net and applied to search for solutions to combinatorial optimization problems.
8. GLOBAL SEARCH METHODS
FOR NEURAL NETWORKS
In Chapter 4, learning in neural networks was viewed as a search mechanism for a minimum of a multi-
dimensional criterion function or error function. There, and in subsequent chapters, gradient-based search
methods were utilized for discovering locally optimal weight configurations for single and multiple unit
nets. Also, in Section 7.5, gradient search was employed for descending on a computational energy function
to reach locally minimum points/states which may represent solutions to combinatorial optimization
problems.

This chapter discusses search methods which are capable of finding global optima of multi-modal multi-
dimensional functions. In particular, it discusses search methods which are compatible with neural network
learning and retrieval. These methods are expected to lead to "optimal" or "near optimal" weight
configurations, by allowing the network to escape local minima during training. Also, these methods can be
used to modify the gradient-type dynamics of recurrent neural nets (e.g., Hopfield's energy minimizing net),
so that the network is able to escape "poor" attractors.

First, a general discussion on the difference between local and global search is presented. A stochastic
gradient descent algorithm is introduced which extends local gradient search to global search. This is
followed by a general discussion of stochastic simulated annealing search for locating globally optimal
solutions. Next, simulated annealing is discussed in the context of stochastic neural nets for improved
retrieval and training. A mean-field approximation of simulated annealing for networks with deterministic
units is presented which offers a substantial speedup in convergence compared to stochastic simulated
annealing. The chapter also reviews the fundamentals of genetic algorithms and their application in the
training of multilayer neural nets. Finally, an improved hybrid genetic algorithm/gradient search method for
feedforward neural net training is presented along with simulations and comparisons to backprop.

8.1 Local Versus Global Search

Consider the optimization problem of finding the extreme point(s) of a real-valued multi-dimensional scalar
function (objective function) y mapping the search space, a compact subset of Rn, into R. An extreme point is a point x* in the search space such that y(x*) takes on its maximum (or minimum) value. In the following
discussion, it will be assumed, without loss of generality, that by optimization we mean minimization. Thus,
an extreme point of y is the "global" minimum of y. Multiple extreme points may exist. In addition to the
global minimum (minima), the function y may also admit local minima. A point x* is a local minimum of y
if y(x*) < y(x) for all x such that ||x* − x|| ≤ ε, for some ε > 0. Figure 8.1.1 illustrates the concepts of global and

local minima for the uni-variate scalar function y(x) = x sin(1/x) for x ∈ [0.05, 0.5].
Figure 8.1.1. Local and global minima for the function y(x) = x sin(1/x).

There exist several ways of determining the minima of a given function. Analytical techniques exploit
Fermat's Stationarity Principle which states that the gradient (or derivative in the uni-variate case) of y with
respect to x is zero at all minima (and maxima). Thus, one can find these minima (and maxima) by solving
the set of equations (possibly nonlinear) of the form ∇y(x) = 0. Here, a solution x* is a minimum of y if the Hessian matrix (the matrix of second partial derivatives of y), evaluated at x*, is positive definite (i.e., xTH(x*)x > 0 for all x ≠ 0), or if the second derivative y''(x*) > 0 in the uni-variate case. The global minimum may then be identified by direct evaluation
of y(x*). This method is theoretically sound as long as the function y is twice differentiable. In practical
situations, though, the above approach is inefficient due to the computational overhead involved in its
implementation on a digital computer.
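
As a small numerical illustration (not from the book) of this analytical approach, the following sketch locates the stationary points of y(x) = x sin(1/x) on [0.05, 0.5] by solving y'(x) = 0 and classifies them with the second derivative; it recovers the three minima cited later in this chapter.

```python
import numpy as np
from scipy.optimize import brentq

y   = lambda x: x * np.sin(1.0 / x)
dy  = lambda x: np.sin(1.0 / x) - np.cos(1.0 / x) / x      # y'(x)
d2y = lambda x: -np.sin(1.0 / x) / x**3                    # y''(x)

grid = np.linspace(0.05, 0.5, 2000)
stationary = [brentq(dy, lo, hi)
              for lo, hi in zip(grid[:-1], grid[1:])
              if dy(lo) * dy(hi) < 0]                       # sign change brackets a root

minima = [x for x in stationary if d2y(x) > 0]              # positive curvature: minima
print([round(x, 3) for x in minima])     # roughly [0.058, 0.092, 0.223]
print(min(minima, key=y))                # the global minimum, near 0.223
```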

Other, more efficient optimization techniques do exist. One example is the simple (steepest) gradient
descent algorithm introduced in Chapter 3 [e.g., see Equation (3.1.21)], which formed the basis for most of
the learning rules discussed so far in this book. Assuming y is differentiable, gradient descent can be
expressed according to the recursive rule:

(8.1.1)

where is a small positive constant, and t is a positive integer representing the iteration number. Given an
initial "guess" x(0) (e.g., a random point in ), the recursive rule in Equation (8.1.1) implements a search
strategy, whereby a sequence of vectors:

x(0), x(1), x(2), ..., x(t), ...

is generated such that

y(x(0)) ≥ y(x(1)) ≥ y(x(2)) ≥ ... ≥ y(x(t)) ≥ ...

and hence at each iteration we move closer to an optimal solution. Although computationally efficient (its
convergence properties and some extensions were considered in Section 5.2.3), gradient search methods
may lead only to local minima of y which happen to be "close" to the initial search point x(0). As an

example, consider the one-dimensional function y(x) = x sin(1/x) shown in Figure 8.1.1. It is clear from this figure that, given any initial search point x(0) ∈ [0.05, 0.12], the gradient descent algorithm will
always converge to one of the two local minima shown. In this case, the region of search space containing
the global solution will never be explored. Hence for local search algorithms, such as gradient descent, the
quality (optimality) of the final solution is highly dependent upon the selection of the initial search point.

Global minimization (optimization) requires a global search strategy; a strategy that cannot be easily fooled
by local minima. In Sections 8.2, 8.4 and 8.5 we will discuss three commonly used global search strategies.

For a survey of global optimization methods, the reader is referred to the book by Törn and Žilinskas
(1989). As for the remainder of this section, we shall discuss extensions which help transform a simple
gradient search into a global search.

8.1.1 A Gradient Descent/Ascent Search Strategy


Using intuition and motivated by the saying: "There is a valley behind every mountain," we may employ
gradient descent/ascent in a simple search strategy which will allow the discovery of global minima.
Assuming a uni-variate objective function y(x), x ∈ [a, b] with a < b, start with x(0) = a and use gradient descent search to reach the first local minimum. Next, save this minimum and its value of y, and proceed by ascending the function y (using Equation (8.1.1) with the negative sign replaced by a positive sign) starting from the initial point obtained by perturbing this minimum by Δx, where Δx is a sufficiently small positive constant. Continue ascending until a local maximum is reached. Now, switch back to gradient descent starting from the current point (maximum) perturbed by Δx until convergence to the second local minimum, and save it and its value of y. This full search cycle is repeated until the search reaches the point x = b. At this point, all minima of y over x ∈ [a, b] have been obtained, and the global minimum is the saved minimum with the smallest value of y. This strategy will always lead to the global minimum when y is


a function of a single variable; i.e., when the search space is one-dimensional.
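
A minimal sketch of this descent/ascent sweep for the uni-variate case is given below, using y(x) = x sin(1/x) from Figure 8.1.1 as the test function; the step size `eta`, the perturbation `dx`, and the stopping tolerance are illustrative choices, not values from the book.

```python
import numpy as np

def y(x):
    return x * np.sin(1.0 / x)

def dy(x):
    return np.sin(1.0 / x) - np.cos(1.0 / x) / x

def descent_ascent_sweep(a=0.05, b=0.5, eta=1e-4, dx=1e-3, tol=1e-7):
    minima = []
    x, descending = a, True
    while x < b:
        step = -eta * dy(x) if descending else eta * dy(x)
        if abs(step) < tol:                # gradient is (almost) zero: an extremum
            if descending:
                minima.append(x)           # record the local minimum just reached
            descending = not descending    # switch between descent and ascent
            step = dx                      # small push past the extremum
        x = min(x + step, b)
    return min(minima, key=y)              # deepest local minimum found over [a, b]

print(descent_ascent_sweep())              # approximately 0.223
```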

One may also use this strategy for multi-dimensional functions, though the search is not guaranteed to find
the global minimum. This is because when the search space has two or more dimensions, the perturbation Δx is now a vector and it is unclear how to set the direction of Δx so that the search point visits all existing minima. Also, in the multidimensional case, the above gradient descent/ascent strategy may get caught in a repeating cycle, where the same local minimum and maximum solutions are repeatedly found. The reader is encouraged to stare at Figure 8.1.2, which depicts a plot of a two-dimensional function, to see how (starting from a local minimum point) the choice of Δx affects the sequence of visited peaks and valleys.

Figure 8.1.2. A plot of a two-variable function showing multiple minima, maxima, and saddle points.

For such differentiable multivariate functions, an example of a search strategy which would be useful for
finding some "good" (sufficiently deep) local minima may be given as follows. Start the initial search point
at a corner of the search region and randomly vary the direction of the perturbation Δx after each local minimum (maximum) is found, such that Δx points in a random direction, away from the current minimum
(maximum). A similar global search strategy is one which replaces the gradient ascent step in the above
descent/ascent search strategy by a "tunneling step." Here, tunneling is used to move the search point away
from the current minimum to another point in its vicinity such that the new point is (hopefully) on a surface
of the search landscape which leads into a deeper minimum. This idea is similar to the one embodied in the
global descent search strategy discussed in Section 5.1.2.

8.1.2 Stochastic Gradient Search: Global Search via Diffusion

The above search methods are deterministic in nature. Stochastic (non-deterministic) ideas may also be
incorporated in gradient search-based optimization, thus leading to stochastic gradient algorithms. In fact,
for problems of moderate to high dimensions, the only feasible methods for global optimization are
stochastic in nature (Schoen, 1991). These methods utilize noise to perturb the function y(x) being
minimized in order to avoid being trapped in "bad" local minima. Though in order for the stochastic method
to converge to a "good" solution, this noise should be introduced appropriately and then subsequently
removed. That is, during the search process the perturbations to y are gradually removed so that the
effective function being minimized will become exactly y prior to reaching the final solution.

In its most basic form, the stochastic gradient algorithm performs gradient descent search on a perturbed

form (x, N) of the function y(x) where the perturbation is additive in nature:

(8.1.2)

Here, N = [N1, N2, ..., Nn]T is a vector of independent noise sources, and c(t) is a parameter which controls
the magnitude of noise. To achieve the gradual reduction in noise mentioned above, c(t) must be selected in
such a way that it approaches zero as t tends to infinity. A simple choice for c(t) is given by (Cichocki and
Unbehauen, 1993)

c(t) = e−t (8.1.3)

with 0 and > 0. Substituting the perturbed function in the gradient descent rule in Equation (8.1.1)
gives the stochastic gradient rule:

(8.1.4)

The stochastic gradient algorithm in Equation (8.1.4) is inspired by the dynamics of the diffusion process in
physical phenomena such as atomic migration in crystals or chemical reactions. The dynamics of the
diffusion process across potential barriers involve a combination of gradient descent and random motion
according to the Smoluchowski-Kramers equation (Aluffi-Pentini et al., 1985):

(8.1.5)

where E(x) is the potential energy, T is the absolute temperature, k is the Boltzmann constant, and m is the

reduced mass. The term is a stochastic force. Geman and Hwang (1986) [see also Chiang et al.
(1987)] developed a method for global optimization, which is essentially the simulation of an annealed
diffusion process. This method is based on Equation (8.1.5) with the temperature T made inversely
proportional to the logarithm of positive time (t) for almost guaranteed convergence to the global minimum.
The discrete-time version of Equation (8.1.5) with annealed temperature leads to the stochastic gradient rule
in Equation (8.1.4). Convergence analysis of a slightly modified version of Equation (8.1.4) can be found in
Gelfand and Mitter (1991).

The search rule in Equation (8.1.4) may be applied to any function y as long as the gradient information can
be determined or estimated. Unlike the gradient search rule in Equation (8.1.1), the stochastic rule in
Equation (8.1.4) may allow the search to escape local minima. Note that for zero mean statistically
independent noise, the present search method will follow, on average, the gradient of y.

The probability that stochastic gradient search leads to the global minimum solution critically depends on
the functional form of the noise amplitude schedule c(t). In the exponential schedule in Equation (8.1.3), the
coefficient controls the amplitude of noise and determines the rate of damping. A sufficiently large should
be used in order for the search to explore a large range of the search space. For large , the stochastic effects
decay very rapidly, and prematurely reduce the search to the simple deterministic gradient search, thus
increasing the probability of reaching a local suboptimal solution. Small values for are desirable since they
allow the stochastic search to explore a sufficiently large number of points on the search surface, which is
necessary for global optimization. However, very small values of lead to a very slow convergence process.
Thus, the coefficient needs to be chosen such that we strike a balance between the desire for fast
convergence and the need to ensure an optimal (or near optimal) solution. The following is an example of
the application of stochastic gradient search for finding the global solution.

Example 8.1.1: Let us use the stochastic gradient rule in Equation (8.1.4) to search for minima of the

function y(x) = x sin(1/x) near the origin. The function y(x) has an infinite number of local minima which become very dense in the region closest to x = 0. The function has three minima in the region x ∈ [0.05, 0.5], as is shown in Figure 8.1.1, which are located approximately at 0.058, 0.091, and 0.223. This function is even; thus for each minimum x* > 0 there exists a symmetric (with respect to the vertical axis) minimum at −x*.
The global minima of y are approximately at 0.223 and −0.223.

Differentiating y(x) and substituting in Equation (8.1.4) leads to the search rule

where c(t) = exp(−t), N(t) is normally distributed random noise with zero mean and unity variance, and

. Initial simulations are performed to test for proper setting of the parameters of c(t). Values of
> 100 and allowed the search to converge to "deep" minima of y(x). Figure 8.1.3 shows two
trajectories of the search point x for = 50 (dashed line) and = 200 (solid line) with = 0.02, and x(0) = 0.07.
The same noise sequence N(t) is used in computing these trajectories. In most simulations with x(0) close to
zero, searches with large lead to "deeper" minima compared to searches with small ( < 100). For = 0, the
search becomes a pure gradient descent search, and with x(0) = 0.07 the local minimum at 0.058 will always be reached. On the other hand, for > 200, the search has a very good chance of converging to the global minimum at 0.223 or its neighboring minimum at x* ≈ 0.091 (some simulations also led to the minima at
0.058, −0.058, −0.091 and ).

Figure 8.1.3. Search trajectories generated by the stochastic gradient search method of Equation (8.1.4) for

the function y(x) = x sin(1/x). The search started at x(0) = 0.07 and converged to the local minimum x* ≈ 0.091 for a noise gain coefficient of 50 (dashed line) and to the global minimum x* ≈ 0.223 for a noise gain coefficient of 200 (solid line).
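
The following is a hedged sketch of the stochastic gradient (Langevin-type) rule of Equation (8.1.4) applied to y(x) = x sin(1/x) of Example 8.1.1. The names eta (step size), amp and decay (for a noise schedule of the form amp*exp(-decay*t)) and their values are illustrative, not the book's symbols; the clipping to [0.05, 0.5] is added only to keep the sketch inside the interval shown in Figure 8.1.1.

```python
import numpy as np

def dy(x):
    return np.sin(1.0 / x) - np.cos(1.0 / x) / x     # derivative of x*sin(1/x)

def stochastic_gradient_search(x0=0.07, eta=2e-4, amp=300.0, decay=0.01,
                               n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for t in range(n_steps):
        c = amp * np.exp(-decay * t)                  # decaying noise amplitude c(t)
        x = x - eta * (dy(x) + c * rng.standard_normal())
        x = float(np.clip(x, 0.05, 0.5))              # keep the search in [0.05, 0.5]
    return x

# Prints the minimum the search settled in; with a large enough noise amplitude
# the search can escape the shallow minima near x(0) = 0.07.
print(stochastic_gradient_search())
```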

The stochastic gradient rule in Equation (8.1.4) can be applied to all of the gradient descent-based learning
rules for both single and multilayer feedforward neural nets. This type of weight update is sometimes
referred to as Langevin-type learning (Heskes and Kappen, 1993b). For example, the Langevin-type
backprop is easily formulated by adding the decaying noise terms cl(t)N(t) and cj(t)N(t) to the right hand
side of the batch version of Equations (5.1.2) and (5.1.9), respectively. The subscripts l and j imply the
possibility for using different noise magnitude schedules for the output and hidden layers and/or units. The
training of multilayer nets according to Langevin-type backprop can be more computationally effective than
deterministic gradient descent (i.e., batch backprop). This is because a fast suboptimal schedule for c(t) can
be used and still lead (on the average) to better solutions than gradient descent. Also, stochastic gradient
search has a better chance of escaping local minima and areas of shallow gradient, which may allow it to
converge faster (Hoptroff and Hall, 1989). It should be noted here that the incremental update version of
backprop [Equations (5.1.2) and (5.1.9)] may also be viewed as a stochastic gradient rule; though, the
stochasticity is intrinsic to the gradient itself, due to the nature of the minimization of an "instantaneous"
error and the random presentation order of the training vectors, as opposed to being artificially introduced,
as in Langevin-type learning.

8.2 Simulated Annealing-Based Global Search

The success of global search methods in locating a globally optimal solution (say, a global minimum) of a
given function y(x) over x hinges on a balance between an exploration process, a guidance process, and a
convergence-inducing process. The exploration process gives the search a mechanism of sampling a
sufficiently diverse set of points x in . This exploration process is usually stochastic in nature. The guidance
process is an explicit or implicit process which evaluates the relative "quality" of search points (e.g., two
consecutive search points) and biases the exploration process to move towards regions of high quality
solutions in . Finally, the convergence-inducing process assures the ultimate convergence of the search to a
fixed solution x*. The dynamic interaction among the above three processes is thus responsible for giving
the search process its global optimizing character. As an exercise, one might consider identifying the above
three processes in the global optimization method of stochastic gradient descent presented in the previous
section. Here, the exploration process is realized by the noise term in Equation (8.1.4). The convergence-
inducing process is realized effectively by the noise amplitude schedule c(t) and by the gradient term ∇y(x).
On the other hand, the search guidance process is not readily identifiable. In fact, this method lacks an
effective guidance process and the only guidance available is the local guidance due to the gradient term.
Note that gradient-based guidance is only effective when the function y(x) being minimized has its global
minimum (or a near optimal minimum) located at the bottom of a wide "valley" relative to other shallow

local minima. A good example of such a function is y(x) = x sin(1/x).

Another stochastic method for global optimization is (stochastic) simulated annealing (Kirkpatrick et al.,
1983; Kirkpatrick, 1984; Aart and Korst, 1989). This method does not use gradient information explicitly
and thus is applicable to a wider range of functions (specifically, functions whose gradients are expensive to
compute or functions which are not differentiable) compared to the stochastic gradient method.

Simulated annealing is analogous to the physical behavior of annealing a molten metal (Metropolis et al., 1953; Kirkpatrick et al., 1983). Above its melting temperature, a metal enters a phase where atoms
(particles) are positioned at random according to statistical mechanics. As with all physical systems, the
particles of the molten metal seek minimum energy configurations (states) if allowed to cool. A minimum
energy configuration means a highly ordered state such as a defect-free crystal lattice. In order to achieve
the defect-free crystal, the metal is annealed: First, the metal is heated above its melting point and then
cooled slowly until the metal solidifies into a "perfect" crystalline structure. Slow cooling (as opposed to
quenching) is necessary to prevent dislocations of atoms and other crystal lattice disruption. The defect-free
crystal state corresponds to the global minimum energy configuration.

Next, a brief presentation of related statistical mechanics concepts is given which will help us understand
the underlying principles of the simulated annealing global optimization method. Statistical mechanics is
the central discipline of condensed matter physics which deals with the behavior of systems with many
degrees of freedom in thermal equilibrium at a finite temperature [for a concise presentation of the basic
ideas of statistical mechanics, see Schrödinger (1946)]. The starting point of statistical mechanics is an
energy function E(x) which measures the thermal energy of a physical system in a given state x, where x
belongs to a set of possible states . If the system's absolute temperature T is not zero (i.e., T > 0), then the
state x will vary in time causing E to fluctuate. Being a physical system, the system will evolve its state in
an average direction corresponding to that of decreasing energy E. This continues until no further decrease
in the average of E is possible, which indicates that the system has reached thermal equilibrium. A
fundamental result from physics is that at thermal equilibrium each of the possible states x occurs with
probability

P(x) = exp(−E(x)/kT) / Σx' exp(−E(x')/kT)                (8.2.1)

where k is Boltzmann's constant and the denominator is a constant which restricts P(x) between zero and
one. Equation (8.2.1) is known as the Boltzmann-Gibbs distribution.

Now define a set of transition probabilities W(x → x') from a state x into a state x'. What is the condition on W(x → x') so that the system may reach and then remain in thermal equilibrium? A sufficient condition
for maintaining equilibrium is that the average number of transitions from x to x' and from x' to x be equal:

P(x) W(x → x') = P(x') W(x' → x) (8.2.2)

or, by dividing by W(x → x') and using Equation (8.2.1)

(8.2.3)

where ΔE = E(x') − E(x). In simulating physical systems, a common choice is the Metropolis state transition
probability (Metropolis et al., 1953) where

W(x → x') = exp(−ΔE/kT) if ΔE > 0;   W(x → x') = 1 if ΔE ≤ 0                (8.2.4)

This has the advantage of making more transitions to lower energy states than those in Equation (8.2.3), and
therefore reaches equilibrium more rapidly. Note that transitions from low to high energy states are possible
except when T = 0.

Simulated annealing optimization for finding global minima of a function y(x) borrows from the above
theory. Two operations are involved in simulated annealing: a thermostatic operation which schedules
decreases in the "temperature" and a stochastic relaxation operation which iteratively finds the equilibrium
solution at the new temperature using the final state of the system at the previous temperature as a starting
point. Here, a function y of discrete or continuous variables can be thought of as the energy function E in
the above analysis. Simulated annealing introduces artificial thermal noise which is gradually decreased in
time. This noise is controlled by a new parameter T which replaces the constant kT in Equation (8.2.4).
Noise allows occasional hill-climbing interspersed with descents. The idea is to apply uniform random
perturbations Δx to the search point x and then determine the resulting change Δy = y(x + Δx) − y(x). If the value of y is reduced (i.e., Δy < 0) the new search point x' = x + Δx is adopted. On the other hand, if the perturbation leads to an increase in y (i.e., Δy > 0) the new search point x' may or may not be adopted. In this case, the determination of whether to accept x' or not is stochastic, with probability exp(−Δy/T). Hence, for large
values of T, the probability of an uphill move in y is large. However, for small T the probability of an uphill
move is low; i.e., as T decreases fewer uphill moves are allowed. This leads to an effective guidance of
search since the uphill moves are done in a controlled fashion, so there is no danger of jumping out of a
local minimum and falling into a worse one.

The following is a step-by-step statement of a general purpose simulated annealing optimization algorithm
for finding the global minimum of a multi-variate function y(x), x .

1. Initialize x to an arbitrary point in . Choose a "cooling" schedule for T. Initialize T at a sufficiently large
value.

2. Compute x' = x + Δx, where Δx is a small uniform random perturbation.

3. Compute Δy = y(x') − y(x).

4. Use the Metropolis transition rule for deciding whether to accept x' as the new search point (or else remain at x). That is, if Δy < 0, the search point becomes x'; otherwise, accept x' as the new point with a transition probability W(x → x') = exp(−Δy/T). For this purpose, select a uniform random number a between zero and one. If W(x → x') > a then x' becomes the new search point; otherwise the search point remains at x.

5. Repeat steps 2 through 4 until the system reaches an equilibrium. Equilibrium is reached when the
number of accepted transitions becomes insignificant which happens when the search point is at or very
close to a local minimum. In practice, steps 2 through 4 may be repeated for a fixed prespecified number of
cycles.

6. Update T according to the annealing schedule chosen in step 1 and repeat steps 2 through 5. Stop when T
reaches zero.
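
A minimal sketch of the above procedure is given below. It uses an exponential cooling schedule and replaces the equilibrium test of step 5 with a fixed number of trial moves per temperature; all parameter names and values are illustrative, not the book's.

```python
import numpy as np

def simulated_annealing(y, x0, lo, hi, T0=1.0, T_min=1e-3, cooling=0.95,
                        moves_per_T=200, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    T = T0
    while T > T_min:
        for _ in range(moves_per_T):                       # steps 2-5: relax at fixed T
            x_new = np.clip(x + rng.uniform(-step, step, size=x.shape), lo, hi)
            dy = y(x_new) - y(x)
            # Metropolis rule: always accept downhill moves; accept uphill moves
            # with probability exp(-dy/T).
            if dy < 0 or rng.random() < np.exp(-dy / T):
                x = x_new
        T *= cooling                                        # step 6: cool and repeat
    return x

# Example: the uni-variate function of Figure 8.1.1.
y = lambda x: float(x[0] * np.sin(1.0 / x[0]))
print(simulated_annealing(y, x0=[0.07], lo=0.05, hi=0.5))   # usually near 0.223
```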

The effectiveness of simulated annealing in locating global minima and its speed of convergence critically
depends on the choice of the cooling schedule for T. Generally speaking, if the cooling schedule is too fast,
a premature convergence to a local minimum might occur, and if it is too slow, the algorithm will require an
excessive amount of computation time to converge. Unfortunately, it has been found theoretically (Geman
and Geman, 1984) that T must be reduced very slowly in proportion to the inverse log of time (processing
cycle)

T(t) = T0 / log(1 + t),    t = 1, 2, ...        (8.2.5)

to guarantee that the simulated annealing search converges almost always to the global minimum. The
problem of accelerating simulated annealing search has received increased attention, and a number of
methods have been proposed to accelerate the search (e.g., Szu, 1986; Salamon et al., 1988). In practice, a
suboptimal solution is sometimes sufficient and faster cooling schedules may be employed. For example,
one may even try a schedule of the form T(t) = αT(t − 1) where 0.85 ≤ α ≤ 0.98, which reduces the
temperature exponentially fast.
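For concreteness, the two schedules just mentioned could be coded as follows; the initial temperature and
the decay factor are hypothetical values chosen within the quoted range.

import math

def logarithmic_schedule(t, T0=10.0):
    # Geman and Geman (1984): temperature must decay no faster than T0 / log(1 + t), t = 1, 2, ...
    return T0 / math.log(1 + t)

def exponential_schedule(t, T0=10.0, alpha=0.95):
    # faster, suboptimal schedule T(t) = alpha * T(t - 1) with 0.85 <= alpha <= 0.98
    return T0 * (alpha ** t)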

Because of its generality as a global optimization method, simulated annealing has been applied to many
optimization problems. Example applications can be found in Geman and Geman (1984), Sontag and
Sussmann (1985), Ligthart et al. (1986), Romeo (1989), Rutenbar (1989), and Johnson et al. (1989; 1991).
By now the reader might be wondering about the applicability of simulated annealing to neural networks.
As is shown in the next section, it turns out that simulated annealing can be naturally mapped onto recurrent
neural networks with stochastic units, for the purpose of global retrieval and/or optimal learning. Simulated
annealing may also be easily applied to the training of deterministic multilayer feedforward nets. Here, one
can simply interpret the error function E(w) [e.g., the SSE function in Equation (5.1.13) or the "batch"
version of the entropy function in Equation (5.2.16)] as the multi-variate scalar function y(x) in the above
algorithm. However, because of its intrinsic slow search speed simulated annealing should only be
considered, in the training of deterministic multilayer nets, if a global or near global solution is desired and
if one suspects that E is a complex multimodal function.

8.3 Simulated Annealing for Stochastic Neural Networks

In a stochastic neural network the units have non-deterministic activation functions as discussed in Section
3.1.6. Here, units behave stochastically, with the output (assumed to be bipolar binary) of the ith unit taking
the value xi = +1 with probability f(neti) and the value xi = −1 with probability 1 − f(neti), where neti is the
weighted-sum (net input) of unit i and

P(xi) = f(xi neti) = 1 / [1 + exp(−2β xi neti)]        (8.3.1)

where P(xi) represents the probability distribution of xi. There are several possible choices of f which could
have been made in Equation (8.3.1), but the choice of the sigmoid function is motivated by statistical
mechanics [Glauber, 1963; Little, 1974. See also Amari (1971)] where the units behave stochastically
exactly like the spins in an Ising model of a magnetic material in statistical physics [for more details on the
connections between stochastic units and the Ising model, the reader is referred to Hinton and Sejnowski
(1983), Peretto (1984), Amit (1989), and Hertz et al., (1991)]. Equation (8.3.1) may also be derived based
on observations relating to the stochastic nature of the post-synaptic potential of biological neurons ( Shaw
and Vasudevan, 1974). Here, the neuron may be approximated as a linear threshold gate (LTG) [recall
Equation (1.1.1)] with zero threshold, signum activation function, and with the net input (post-synaptic
potential) being a Gaussian random variable as explored in Problem 8.3.1.

The parameter β in Equation (8.3.1) controls the steepness of the sigmoid f(net) at net = 0. We may think of
β as the reciprocal pseudo-temperature, β = 1/T. When the "temperature" T approaches zero, the sigmoid
becomes a step function and the stochastic unit becomes deterministic. As T increases, this sharp threshold
is "softened" in a stochastic way, thus making the unit stochastic. Next, it is shown how stochastic neural
nets with units described by Equation (8.3.1) and with controllable temperature T form a natural substrate
for the implementation of simulated annealing-based optimization.
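A stochastic bipolar unit obeying Equation (8.3.1) takes only a few lines of code. The sampling routine
below is a sketch of ours rather than code from the text; it simply draws +1 with probability f(net) and −1
otherwise.

import math
import random

def stochastic_unit(net, T):
    """Return +1 with probability f(net) = 1 / (1 + exp(-2*net/T)), and -1 otherwise."""
    if T == 0:                  # deterministic limit: the unit becomes sgn(net)
        return 1 if net >= 0 else -1
    z = -2.0 * net / T
    if z > 50:                  # avoid overflow: probability of +1 is essentially zero
        return -1
    p_plus = 1.0 / (1.0 + math.exp(z))
    return 1 if random.random() < p_plus else -1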

8.3.1 Global Convergence in a Stochastic Recurrent Neural Net: The Boltzmann Machine

Since simulated annealing is a global optimization method, one might be tempted to consider its use to
enhance the convergence to optimal minima of gradient-type nets such as the Hopfield net used for
combinatorial optimization. In combinatorial optimization problems one is interested in finding a solution
vector x ∈ {−1, 1}^n (or {0, 1}^n) which minimizes an objective function y(x). When y(x) is quadratic, the
continuous Hopfield net may be used to search for local minimizers of y(x). This is achieved by first
mapping y(x) onto the quadratic energy function of the Hopfield net and then using the net as a gradient
descent algorithm to minimize E(x), thus minimizing y(x) (see Section 7.5 for details).

Rather than using the suboptimal Hopfield net approach, the desired global minimum of y(x) may be
obtained by a direct application of the simulated annealing method of Section 8.2. However, the promise of
fast analog hardware implementations of the Hopfield net leads us to take another look at the network
optimization approach, but with modifications which improve convergence to global solutions. The
following is an optimization method based on an efficient way of incorporating simulated annealing search
in a discrete Hopfield net. This method, though, is only suitable for quadratic functions.

Consider the discrete-state discrete-time recurrent net (discrete Hopfield model) with ith unit dynamics
described by Equation (7.1.37), repeated here for convenience:

xi(k + 1) = sgn(neti(k)) = sgn( Σj wij xj(k) + Ii )        (8.3.2)

This deterministic network has a quadratic Liapunov function (energy function) if its weight matrix is
symmetric with positive (or zero) diagonal elements and if serial state update is used (recall Problem
7.1.18). In other words, this net will always converge to one of the local minima of its energy function E(x)
given by

E(x) = −(1/2) Σi Σj wij xi xj − Σi Ii xi        (8.3.3)

Next, consider a stochastic version of the above net, referred to as the stochastic Hopfield net, where we
replace the deterministic threshold units by stochastic units according to Equation (8.3.1). By employing
Equation (8.3.1) and assuming "thermal" equilibrium, one may find the transition probability from xi to −xi
(i.e., the probability to flip unit i from +1 to −1 or vice versa) as

W(xi → −xi) = f(−xi neti) = 1 / [1 + exp(ΔEi / T)]        (8.3.4)

The right-most term in Equation (8.3.4) is obtained by using Equation (8.3.3), which gives
ΔEi = 2 xi neti for the energy change caused by flipping unit i. The transition
probabilities in Equation (8.3.4) give a complete description of the stochastic sequence of states in the
stochastic Hopfield net. Note that if T = 0, the probability of a transition which increases the energy E(x)
becomes zero and that of a transition which decreases E(x) becomes one; hence, the net reduces to the
stable deterministic Hopfield net. On the other hand, for T > 0 a transition which increases E(x) is allowed
but with a probability which is smaller than that of a transition which decreases E(x). This last observation
coupled with the requirement that one and only one stochastic unit (chosen randomly and uniformly) is
allowed to flip its state at a given time, guarantees that "thermal" equilibrium will be reached for any T > 0.

It may now be concluded that the serially updated stochastic Hopfield net with the stochastic dynamics in
Equation (8.3.1) is stable (in an average sense) for T ≥ 0 as long as its interconnection matrix is symmetric
with positive diagonal elements. In other words, this stochastic net will reach an equilibrium state where the
average value of E is a constant when T is held fixed for a sufficiently long period of time. Now, if a slowly
decreasing temperature T is used with an initially large value, the stochastic Hopfield net becomes
equivalent to a simulated annealing algorithm; i.e., when initialized at a random binary state x(0) the net
will perform a stochastic global search seeking the global minimum of E(x). As discussed in Section 8.2, at
the beginning of the computation, a higher temperature should be used so that it is easier for the states to
escape from local minima. Then, as the computation proceeds, the temperature is gradually decreased
according to a pre-specified cooling schedule. Finally, as the temperature approaches zero, the state, now
placed (hopefully) near the global minimum, will converge to this minimum.

The above stochastic net is usually referred to as the Boltzmann machine because, at equilibrium, the
probability of the states of the net is given by the Boltzmann-Gibbs distribution of Equation (8.2.1), or
equivalently

P(x) / P(x') = exp{ −[E(x) − E(x')] / T }        (8.3.5)

where x and x' are two states in {−1, 1}^n which differ in only one bit. Equation (8.3.5) can be easily derived
by employing Equations (8.3.1) and (8.3.3).
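Putting Equations (8.3.1) through (8.3.4) together, a global-retrieval search by an annealed stochastic
Hopfield net might be sketched as follows. The weight-matrix format, cooling schedule, and sweep counts
below are illustrative assumptions, not values taken from the text.

import math
import random

def energy(W, I, x):
    # E(x) = -1/2 sum_ij w_ij x_i x_j - sum_i I_i x_i, as in Equation (8.3.3)
    n = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(I[i] * x[i] for i in range(n))

def anneal_stochastic_hopfield(W, I, T0=5.0, alpha=0.9, sweeps=20, T_min=0.05):
    """Serially update randomly chosen stochastic units while lowering T."""
    n = len(W)
    x = [random.choice([-1, 1]) for _ in range(n)]        # random initial state x(0)
    T = T0
    while T > T_min:
        for _ in range(sweeps * n):
            i = random.randrange(n)                       # one randomly chosen unit at a time
            net_i = sum(W[i][j] * x[j] for j in range(n)) + I[i]
            dE = 2.0 * x[i] * net_i                       # energy change caused by flipping x_i
            z = dE / T
            # flip with probability 1 / (1 + exp(dE / T)), Equation (8.3.4)
            if z < 50 and random.random() < 1.0 / (1.0 + math.exp(z)):
                x[i] = -x[i]
        T *= alpha                                        # cool according to the chosen schedule
    return x, energy(W, I, x)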

8.3.2 Learning in Boltzmann Machines

In the following, statistical mechanics ideas are extended to learning in stochastic recurrent networks or
"Boltzmann learning" ( Hinton and Sejnowski, 1983, 1986; Ackley et al., 1985). These networks consist of
n arbitrarily interconnected stochastic units where the state xi of the ith unit is 1 or −1 with probability
f(netixi) as in Equation (8.3.1). The units are divided into visible and hidden units as shown in Figure 8.3.1.
The hidden units have no direct inputs from, nor do they supply direct outputs to, the outside world. The
visible units may (but need not) be further divided into input units and output units. The units are
interconnected in an arbitrary way, but whatever the interconnection pattern, all connections must be

symmetric, wij = wji, with wii = 0. The net activity at unit i is given by neti = Σj wij xj, where thresholds
(biases) are omitted for convenience and self-feedback is not allowed. This leads to an energy function for
this net given by

E(x) = −(1/2) Σi Σj wij xi xj        (8.3.6)

whose minima are the stable states x* characterized by xi* = sgn(neti), i = 1, 2, ..., n. According to the
discussion in Section 8.3.1, and because of the existence of an energy function, we find that the states of the
present stochastic net are governed by the Boltzmann-Gibbs distribution of Equation (8.2.1), which may be
adapted for the present network/energy function as

P(x) = exp[−E(x) / T] / Σx' exp[−E(x') / T]        (8.3.7)

where the sum in the denominator is taken over the set {−1, +1}^n of all possible states x'.

Figure 8.3.1. A stochastic net with visible and hidden units. Connections between any two units (if they
exist) must be symmetric. The visible units may be further divided into input and output units as illustrated.

Thus far, we have designed a stable stochastic net (Boltzmann machine) which is capable of reaching the
global minimum (or a good suboptimal minimum) of its energy function by slowly decreasing the
pseudo-temperature T = 1/β, starting from a sufficiently high
temperature. Therefore, we may view this net as an extension of the stochastic Hopfield net to include
hidden units. The presence of hidden units has the advantage of theoretically allowing the net to represent
(learn) arbitrary mappings/associations. In Boltzmann learning, the weights wij are adjusted to give the
states of the visible units a particular desired probability distribution.

Next, the derivation of a Boltzmann learning rule is presented. This rule is derived by minimizing a
measure of difference between the probability of finding the states of visible units in the freely running net
and a set of desired probabilities for these states. Before proceeding any further, we denote the states of the
visible units by the activity pattern α and those of the hidden units by β. Let N and K be the numbers of
visible and hidden units, respectively. Then, the set A of all visible patterns α has a total of 2^N members
(state configurations), and the set G of all hidden patterns β has 2^K members. The vector x still represents
the state of the whole network and it belongs to the set of 2^(N+K) = 2^n possible network states. The
probability P(α) of finding the visible units in state α, irrespective of β, is then obtained as

P(α) = Σβ∈G P(α, β) = (1/Z) Σβ∈G exp[ −E(α, β) / T ]        (8.3.8)

Here, P(x) = P(α, β) denotes the joint probability that the visible units are in state α and the hidden units are
in state β, given that the network is operating in its freely running condition. Also, E(x) = E(α, β) is the
energy of the network when the visible units are in state α and the hidden units are jointly in state β. The
term Z is the denominator in Equation (8.3.7) and should be interpreted as Z = Σα∈A Σβ∈G exp[ −E(α, β) / T ].
Equation (8.3.8) gives the actual probability P(α) of finding the visible units in state α in the freely running
network at "thermal" equilibrium. This probability is determined by the weights wij. Now, assume that we
are given a set of desired probabilities R(α), independent of the wij's, for the visible states. Then, the
objective is to bring the distribution P(α) as close as possible to R(α) by adjusting the wij's. A suitable
measure of the difference between P(α) and R(α) is the relative entropy H (see Section 5.2.6)

H = Σα∈A R(α) ln [ R(α) / P(α) ]        (8.3.9)

which is positive or zero (H is zero only if R(α) = P(α) for all α). Therefore, we may arrive at a learning
equation by performing gradient descent on H:

Δwij = −η (∂H / ∂wij)        (8.3.10)

where η is a small positive constant. Using Equations (8.3.6) through (8.3.8) and recalling that wij = wji, we
find

∂H/∂wij = −(1/T) Σα [R(α)/P(α)] [ Σβ P(α, β) xi(α, β) xj(α, β) − P(α) Σx P(x) xi xj ]        (8.3.11)

where xi(α, β) denotes the state of unit i in the network state x = (α, β); that is, xi (xj) should be interpreted
as the state of unit i (j), given that the visible units are in state α and the hidden units are jointly in state β.
Next, using Equation (8.3.8) and noting that the quantity Σx P(x) xi xj is the average <xi xj>, gives

∂H/∂wij = −(1/T) [ Σα Σβ R(α) [P(α, β)/P(α)] xi(α, β) xj(α, β) − <xi xj> ]        (8.3.12)

Thus, substituting Equation (8.3.12) in (8.3.10) leads to the Boltzmann learning rule:

Δwij = η' ( <xi xj>clamped − <xi xj> )        (8.3.13)

where η' = η/T is used and

<xi xj>clamped = Σα Σβ R(α) P(β|α) xi(α, β) xj(α, β)        (8.3.14)

represents the value of <xi xj> when the visible units are clamped in state α, averaged over the α's according
to their probabilities R(α). Note that in Equation (8.3.14), the term P(α, β)/P(α) was replaced by the
conditional probability P(β|α) according to Bayes' rule.

The first term on the right-hand side of Equation (8.3.13) is essentially a Hebbian learning term, computed
with the visible units clamped, while the second term corresponds to anti-Hebbian learning with the system
free running. Note that learning converges when the free unit-unit correlations are equal to the clamped
ones. It is very important that the correlations in the Boltzmann learning rule be computed when the system
is in thermal equilibrium at temperature T = 1/β > 0, since the derivation of this rule hinges on the
Boltzmann-Gibbs distribution of Equation (8.3.7). At equilibrium, the state x fluctuates and we measure the
correlations <xi xj> by taking a time average of xi xj. This must be done twice; once with the visible units
clamped in each of their states α for which R(α) is non-zero, and once with the α's unclamped. Thermal
equilibrium must be reached for each of these computations.
As the reader may have already suspected by examining Equations (8.3.13) and (8.3.14), Boltzmann
learning is very computationally intensive. Usually, one starts with a high temperature (very small β) and
chooses a cooling schedule. At each of these temperatures the network is allowed to follow its stochastic
dynamics according to Equation (8.3.1) or, equivalently, many units are sampled and updated according to
Equation (8.3.4). The temperature is lowered slowly according to the preselected schedule until T reaches a
small final value and equilibrium is reached. This simulated annealing search must be repeated with the
visible units clamped to each desired pattern and again with the visible units unclamped. In the computation
of <xi xj>clamped, the visible states are clamped to randomly drawn patterns from the training set according
to a given probability distribution R(α). For each such training pattern, the network seeks equilibrium
following the same annealing schedule. The weights are updated only after enough training patterns are
taken. This whole process is repeated many times to achieve convergence to a good set of weights wij.
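The clamped/free bookkeeping behind Equations (8.3.13) and (8.3.14) is easier to see in code. The sketch
below is a deliberately simplified illustration: the annealing schedule, sweep and sample counts, and
learning rate are arbitrary choices of ours, the training patterns are assumed equally probable, and no
attempt is made at efficiency.

import math
import random

def equilibrium_correlations(W, clamp, T_final=1.0, T0=5.0, alpha=0.9, sweeps=10, samples=50):
    """Anneal a stochastic net (some units clamped) down to T_final, then estimate
    the correlations <x_i x_j> by time-averaging at equilibrium."""
    n = len(W)
    x = [clamp.get(i, random.choice([-1, 1])) for i in range(n)]

    def sweep(T):
        for _ in range(n):
            i = random.randrange(n)
            if i in clamp:
                continue                                  # clamped units are held fixed
            dE = 2.0 * x[i] * sum(W[i][j] * x[j] for j in range(n))
            if dE / T < 50 and random.random() < 1.0 / (1.0 + math.exp(dE / T)):
                x[i] = -x[i]

    T = T0
    while T > T_final:                                    # cooling phase
        for _ in range(sweeps):
            sweep(T)
        T *= alpha
    corr = [[0.0] * n for _ in range(n)]
    for _ in range(samples):                              # measurement phase near T_final
        sweep(T_final)
        for i in range(n):
            for j in range(n):
                corr[i][j] += x[i] * x[j] / samples
    return corr

def boltzmann_learning_step(W, training_patterns, eta=0.05):
    """One update of Equation (8.3.13): clamped (Hebbian) minus free (anti-Hebbian) correlations.
    training_patterns is a list of dicts mapping visible unit indices to +/-1."""
    n = len(W)
    clamped = [[0.0] * n for _ in range(n)]
    for pattern in training_patterns:                     # visible units clamped to each pattern
        c = equilibrium_correlations(W, clamp=pattern)
        for i in range(n):
            for j in range(n):
                clamped[i][j] += c[i][j] / len(training_patterns)
    free = equilibrium_correlations(W, clamp={})          # freely running net
    for i in range(n):
        for j in range(n):
            if i != j:                                    # keep w_ii = 0; symmetry follows since corr is symmetric
                W[i][j] += eta * (clamped[i][j] - free[i][j])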

The learning rule described above is compatible with pattern completion, in which a trained net is expected
to fill in missing bits of a partial pattern when such a pattern is clamped on the visible nodes. This is
reminiscent of retrieval in the dynamic associative memories of Chapter 7. Note that the presence of hidden
units allows for high memory capacity. During retrieval, the weights derived in the training phase are held
fixed and simulated annealing-based global retrieval is used as discussed in Section 8.3.1. Here, the visible
units are clamped at corresponding known bits of a noisy/partial input pattern. Starting from a high
temperature, the net follows the stochastic dynamics in Equation (8.3.1) or equivalently, state transitions are
made according to Equation (8.3.4). The temperature is gradually lowered according to an appropriate
schedule until the dynamics become deterministic at T = 0 and convergence to the "closest" pattern (global
solution) is (hopefully) achieved.

We may also extend Boltzmann learning to handle the association of input/output pairs of patterns, as in
supervised learning in multilayer perceptron nets. Here, we need to distinguish between two types of
visible units: input and output. Let us represent the states of the input units, the output units, and the hidden
units by α, γ, and β, respectively. Then we want the network to learn the associations α → γ. We may now
pose the problem as follows: for each α we want to adjust the wij's such that the conditional probability
distribution P(γ|α) is as close as possible to a desired distribution R(γ|α). Assuming that the α's occur with
probabilities p(α), a suitable error measure is

H = Σα p(α) Σγ R(γ|α) ln [ R(γ|α) / P(γ|α) ]        (8.3.15)

which leads to the Boltzmann learning rule (Hopfield, 1987)

Δwij = η' ( <xi xj>α, γ clamped − <xi xj>α clamped )        (8.3.16)

In Equation (8.3.16), both the inputs and outputs are clamped in the Hebbian term, while only the input
states are clamped in the anti-Hebbian (unlearning) term, with averages over the inputs taken in both cases.
Examples of applications of Boltzmann machine learning can be found in Ackley et al. (1985), Parks
(1987), Sejnowski et al. (1986), Kohonen et al. (1988), and Lippmann (1989). Theoretically, Boltzmann
machines with learning may outperform gradient-based learning such as backprop. However, the
demanding computational overhead associated with these machines would usually render them impractical
in software simulations. Specialized electronic ( Alspector and Allen, 1987) and optoelectronic ( Farhat,
1987; Ticknor and Barrett, 1987) hardware has been developed for the Boltzmann machine. However, such
hardware implementations are still experimental in nature. Variations and related networks can be found in
Derthick (1984), Hopfield et al. (1983), Smolensky (1986), van Hemmen et al. (1990), Galland and Hinton
(1991), and Apolloni and De Falco (1991).

8.4 Mean-Field Annealing and Deterministic Boltzmann Machines


Mean field annealing ( Soukoulis et al., 1983; Bilbro et al., 1989) is a deterministic approximation to
simulated annealing which is significantly more computationally efficient (faster) than simulated annealing
( Bilbro et al., 1992). Instead of directly simulating the stochastic transitions in simulated annealing, the
mean (or average) behavior of these transitions is used to characterize a given stochastic system. Because
computations using the mean transitions attain equilibrium faster than those using the corresponding
stochastic transitions, mean field annealing relaxes to a solution at each temperature much faster than does
stochastic simulated annealing. This leads to a significant decrease in computational effort. The idea of
using a deterministic mean-valued approximation for a system of stochastic equations to simplify the
analysis has been adopted at various instances in this book (e.g., see Section 4.3). Generally speaking, such
approximations are adequate in high dimensional systems of many interacting units (states) where each
state is a function of all or a large number of other states allowing the central limit theorem to be used (see
Problem 7.5.9). In this section, we restrict our discussion of mean field annealing to the Boltzmann machine
which was introduced in the previous section.

8.4.1 Mean-Field Retrieval

Consider a stochastic Hopfield net with the stochastic dynamics given in Equation (8.3.1). The evolution of
the stochastic state xi of unit i depends on neti, which involves variables xj that themselves fluctuate between
−1 and +1. Let us transform the set of n stochastic equations in xi to n deterministic equations in <xi>
governing the means of the stochastic variables. If we focus on a single variable xi and compute its average
by assuming no fluctuations of the other xj's (this allows us to replace neti by its average <neti>), we get

<xi> = tanh( β <neti> ) = tanh[ ( Σj wij <xj> + Ii ) / T ]        (8.4.1)

The system is now deterministic and is approximated by the n mean-field equations represented by
Equation (8.4.1). It is important to point out that Equation (8.4.1) is meaningful only when the network is at
thermal equilibrium, which means that all the quantities <xi> converge (become time-independent).
Luckily, the stochastic Hopfield net is guaranteed to reach thermal equilibrium, as was discussed in Section
8.3.1.

At thermal equilibrium, the stochastic Hopfield net fluctuates about the constant average values in Equation
(8.4.1). The mean state <x> is thus one of the local minima of the quadratic energy function E(x) at
temperature T = 1/β. The location of this minimum may then be computed by solving the set of n nonlinear
mean-field equations. An alternative way is to solve for <x> by gradient descent on E(x) from an initial
random state. This is exactly what the deterministic continuous Hopfield net with hyperbolic tangent
activations does [recall Equation (7.1.25)]. In fact, Equation (8.4.1) has the same form as the equation
governing the equilibrium points of the Hopfield net (Bilbro et al., 1989). To see this, we recall from
Equation (7.1.19) the dynamics of the ith unit in a continuous-state electronic Hopfield net, namely

Ci (dui/dt) = −ui/Ri + Σj wij xj + Ii        (8.4.2)

The equilibria of Equation (8.4.2) are given by setting dui/dt to zero, giving

ui = Ri ( Σj wij xj + Ii )        (8.4.3)

Assuming the common choice xi = f(ui) = tanh(β ui) in Equation (8.4.3) gives

xi = tanh[ β Ri ( Σj wij xj + Ii ) ]        (8.4.4)

which becomes identical in form to Equation (8.4.1) after setting Ri = 1.

The electronic continuous-state (deterministic) Hopfield net must employ high-gain amplifiers (large β) in
order to achieve a binary-valued solution as is normally generated by the original stochastic net. However,
starting with a deterministic net having large β may lead to a poor local minimum, as is the case with a
stochastic net whose "temperature" T is quenched. Since annealing a stochastic net increases the probability
that the state will converge to a global minimum, we may try to reach this minimum by annealing the
approximate mean-field system. This approach is known as mean-field annealing. Mean-field annealing can
be realized very efficiently in electronic (analog) nets like the one in Figure (7.1.3) where dynamic
amplifier gains allow for a natural implementation of continuous cooling schedules (Lee and Sheu, 1993).
This is referred to as "hardware annealing."

The deterministic Boltzmann machine is applicable only to problems involving quadratic cost functions.
However, the principles of mean-field annealing may still be applied to more general cost functions with
substantial savings in computing time by annealing a steady-state average system as opposed to a stochastic
one. In addition to being faster than simulated annealing, mean-field annealing has proved to lead to better
solutions in several optimization problems (Van den Bout and Miller, 1988, 1989; Cortes and Hertz, 1989;
Bilbro and Snyder, 1989).
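A minimal sketch of mean-field annealing for a quadratic energy follows. It simply iterates a damped
version of the fixed-point condition in Equation (8.4.1) while the temperature is lowered; the damping
factor, schedule, and iteration counts are illustrative choices, not prescriptions from the text.

import math
import random

def mean_field_anneal(W, I, T0=5.0, alpha=0.9, T_min=0.05, iters=100, damp=0.5):
    """Relax <x_i> = tanh((sum_j w_ij <x_j> + I_i) / T) at each temperature, then cool."""
    n = len(W)
    m = [random.uniform(-0.1, 0.1) for _ in range(n)]     # mean activations <x_i>, near zero
    T = T0
    while T > T_min:
        for _ in range(iters):                            # relax to the mean-field solution at T
            for i in range(n):
                net_i = sum(W[i][j] * m[j] for j in range(n)) + I[i]
                m[i] = (1 - damp) * m[i] + damp * math.tanh(net_i / T)
        T *= alpha                                        # lower the temperature and re-relax
    return [1 if mi >= 0 else -1 for mi in m]             # threshold the final means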

8.4.2 Mean-Field Learning

The excessive number of calculations required by Boltzmann machine learning may be circumvented by
extending the above mean-field method to adapting the wij's (Peterson and Anderson, 1987). Here, the
correlations <xi xj> are approximated by si sj, where si is given by the average equation

si = tanh( β Σj wij sj )        (8.4.5)

as in Equation (8.4.1), but with Ii = 0 for convenience. Equation (8.4.5) applies for free units (hidden units
and unclamped visible units). For a clamped visible unit i, si is set to ±1 (the value at which the unit's output
is clamped). As required by Boltzmann learning, the correlations <xi xj> should be computed at thermal
equilibrium. This means that we must use approximation terms si sj where the si's (sj's) are solutions to the n
nonlinear equations represented by Equation (8.4.5). One may employ an iterative method to solve for the
unclamped states si according to

si(k + 1) = tanh( β Σj wij sj(k) )        (8.4.6)

combined with annealing (gradual increasing of β). Peterson and Anderson (1987) reported that this mean-
field learning is 10 to 30 times faster than simulated annealing on some test problems with somewhat better
results.
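In the same spirit, the deterministic approximation of the learning correlations can be sketched by iterating
Equation (8.4.6) with the clamped units held at their target values. The routine below is again only an
illustration; the damping factor, temperature, and iteration count are assumptions of ours.

import math
import random

def mean_field_correlations(W, clamp, T=1.0, iters=200, damp=0.5):
    """Approximate <x_i x_j> by s_i s_j, with the s_i obtained by iterating Equation (8.4.6)."""
    n = len(W)
    s = [clamp.get(i, random.uniform(-0.1, 0.1)) for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            if i in clamp:
                continue                                   # clamped visible units keep their values
            net_i = sum(W[i][j] * s[j] for j in range(n))
            s[i] = (1 - damp) * s[i] + damp * math.tanh(net_i / T)
    return [[s[i] * s[j] for j in range(n)] for i in range(n)]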

8.5 Genetic Algorithms in Neural Network Optimization

Genetic algorithms are global optimization algorithms based on the mechanics of natural selection and
natural genetics. They employ a structured yet randomized parallel multipoint search strategy which is
biased towards reinforcing search points of "high fitness"; i.e., points at which the function being
minimized has relatively low values. Genetic algorithms are similar to simulated annealing (Davis, 1987)
in that they employ random (probabilistic) search strategies. However, one of the apparent distinguishing
features of genetic algorithms is their effective implementation of parallel multipoint search. This section
presents the fundamentals of genetic algorithms and shows how they may be used for neural network
training.

8.5.1 Fundamentals of Genetic Algorithms

The genetic algorithm (GA) as originally formulated by Holland (1975) was intended to be used as a
modeling device for organic evolution. Later, De Jong (1975) demonstrated that the GA may also be used
to solve optimization problems, and that globally optimal results may be produced. Although there has
been a lot of work done on modifications and improvements to the method, this section will present the
standard genetic algorithm, and the analysis will follow the presentation given in Goldberg (1983).

In its simplest form, the standard genetic algorithm is a method of stochastic optimization for discrete
programming problems of the form:

Maximize f(s)        (8.5.1)

subject to s ∈ {0, 1}^n

In this case, f : {0, 1}^n → R is called the fitness function, and the n-dimensional binary vectors s are called
strings.
The most noticeable difference between the standard genetic algorithm and the methods of optimization
discussed earlier is that at each stage (iteration) of the computation, genetic algorithms maintain a collection
of samples from the search space rather than a single point. This collection of samples is called a
population of strings.

To start the genetic search, an initial population of, say, M binary strings S(0) = {s1, s2, ..., sM}, each with
n bits, is created. Usually, this initial population is created randomly because it is not known a priori where
the globally optimal strings are likely to be found. If such information is given, though, it may be used to
bias the initial population towards the most promising regions of the search space. From this initial
population, subsequent
populations S(1), S(2), ... S(t), ... will be computed employing the three genetic operators of selection,
crossover, and mutation.

The standard genetic algorithm uses a roulette wheel method for selection, which is a stochastic version of
the survival of the fittest mechanism. In this method of selection, candidate strings from the current
generation S(t) are selected to survive to the next generation S(t+1) by designing a roulette wheel where
each string in the population is represented on the wheel in proportion to its fitness value. So those strings
which have a high fitness are given a large share of the wheel, while those strings with low fitness are given
a relatively small portion of the roulette wheel. Finally, selections are made by spinning the roulette wheel
M times and accepting as candidates those strings which are indicated at the completion of the spin.

Example 8.5.1: As an example, suppose M = 5, and consider the following initial population of strings:
S(0) = {(10110), (11000), (11110), (01001), (00110)}. For each string si in the population, the fitness may
be evaluated: f(si). The appropriate share of the roulette wheel to allot the ith string is obtained by dividing
the fitness of the ith string by the sum of the fitnesses of the entire population: f(si) / Σj f(sj). Figure
8.5.1 shows a listing of the population with associated fitness values, and the corresponding roulette wheel.


Figure 8.5.1. (a) A listing of the 5-string population and the associated fitness values. (b) Corresponding
roulette wheel for string selection. The integers shown on the roulette wheel correspond to string labels.

To compute the next population of strings, the roulette wheel is spun five times. The strings which are
chosen from this method of selection, though, are only candidate strings for the next population. Before
actually being copied into the new population, these strings must undergo crossover and mutation.
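Roulette wheel selection is straightforward to implement. The sketch below builds the cumulative fitness
shares explicitly and spins the wheel M times; fitness values are assumed to be non-negative.

import random

def roulette_select(population, fitness, M):
    """Select M candidate strings with probability proportional to their fitness."""
    total = sum(fitness)
    wheel, running = [], 0.0
    for f in fitness:                       # build cumulative shares of the wheel
        running += f / total
        wheel.append(running)
    selected = []
    for _ in range(M):                      # spin the wheel M times
        spin = random.random()
        for s, share in zip(population, wheel):
            if spin <= share:
                selected.append(s)
                break
    return selected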

Pairs of the M (assume M even) candidate strings which have survived selection are next chosen for
crossover, which is a recombination mechanism. The probability that the crossover operator is applied will
be denoted by Pc. Pairs of strings are selected randomly from S(t), without replacement, for crossover. A
random integer k, called the crossing site, is chosen from {1, 2, ..., n − 1}, and then the bits from the two
chosen strings are swapped after the kth bit with probability Pc; this is repeated until S(t) is empty. For
example, Figure 8.5.2 illustrates a crossover for two 6-bit strings. In this case, the crossing site k is 4, so the
bits from the two strings are swapped after the fourth bit.


Figure 8.5.2. An example of a crossover for two 6-bit strings. (a) Two strings are selected for crossover. (b)
A crossover site is selected at random. In this case k = 4. (c) Now swap the two strings after the kth bit.

Finally, after crossover, mutation is applied to the candidate strings. The mutation operator is a stochastic
bit-wise complementation, applied with uniform probability Pm. That is, for each single bit in the
population, the value of the bit is flipped from 0 to 1 or from 1 to 0 with probability Pm. As an example,
suppose Pm = 0.1, and the string s = 11100 is to undergo mutation. The easiest way to determine which bits,
if any, to flip is to choose a uniform random number r ∈ [0, 1] for each bit in the string. If r ≤ Pm, then the bit
is flipped; otherwise no action is taken. For the string s above, suppose the random numbers (0.91, 0.43,
0.03, 0.67, 0.29) were generated; only the third number falls below Pm, so the third bit is flipped and the
resulting mutated string is 11000.

After mutation, the candidate strings are copied into the new population of strings: S(t+1), and the whole
process is repeated by calculating the fitness of each string, using a roulette wheel method of selection, and
applying the operators of crossover and mutation.
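Crossover, mutation, and the resulting generation loop can be sketched in the same style. Strings are
represented as lists of bits, the roulette_select routine from the earlier sketch is reused, and the default Pc
and Pm are merely representative values.

import random

def crossover(a, b, Pc=0.8):
    """Single-point crossover: with probability Pc, swap the tails after a random crossing site k."""
    if random.random() < Pc:
        k = random.randint(1, len(a) - 1)                   # crossing site in {1, ..., n-1}
        a, b = a[:k] + b[k:], b[:k] + a[k:]
    return a, b

def mutate(s, Pm=0.01):
    """Flip each bit independently with probability Pm."""
    return [1 - bit if random.random() <= Pm else bit for bit in s]

def next_generation(population, fitness, Pc=0.8, Pm=0.01):
    """One GA generation: selection, then pairwise crossover, then mutation."""
    M = len(population)
    candidates = roulette_select(population, fitness, M)   # from the selection sketch above
    random.shuffle(candidates)
    new_pop = []
    for i in range(0, M - 1, 2):                            # pair up the surviving candidates
        a, b = crossover(candidates[i], candidates[i + 1], Pc)
        new_pop += [mutate(a, Pm), mutate(b, Pm)]
    if M % 2:                                               # odd M: carry the last candidate over
        new_pop.append(mutate(candidates[-1], Pm))
    return new_pop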

Although the next section presents a more formal analysis of the action of the genetic operators, some
qualitative comments may be helpful first. In the roulette wheel method of selection, it is clear that only the
above-average strings will tend to survive successive populations. Applying only selection to a population
of strings results in a sort of MAX operation, that is, the operation of selecting the maximum from a set of
numbers. It has been shown (Suter and Kabrisky, 1992) that a MAX operation may be implemented by
successive application of a theorem by Hardy et al. (1952), which states that for a set of non-negative real
numbers Q ⊂ R,

min(Q) < ave(Q) < max(Q)        (8.5.2)

(where it is assumed that not all of the elements of Q are equal). So the maximum value may be obtained by
simply averaging the elements of Q and excluding all elements which are below average. The remaining
subset may be averaged, and the process repeated until only the maximum value survives. The roulette
wheel method of selection may be thought of as a "soft" stochastic version of this MAX operation, where
strings with above-average strengths tend to survive successive roulette wheel spins.

The reason that the stochastic version is used, rather than just deterministically always choosing the best
strings to survive, gets at the crux of the underlying theory and assumptions of genetic search. This theory is
based on the notion that even strings with very low fitness may contain some useful partial information to
the search. For this reason, though the survival probability is small, these lowly fit strings are not altogether
discarded during the search.

Of the three genetic operators, the crossover operator is the most crucial in obtaining global results.
Crossover is responsible for mixing the partial information contained in the strings of the population. In the
next section, it will be conjectured that this type of mixing will lead to the formation of optimal strings.
Based on empirical evidence, it has been found (De Jong, 1975; Grefenstette, 1986; Schaffer et al., 1989)
that reasonable values for the probability of crossover are 0.6 ≤ Pc ≤ 0.99.

Unlike the previous two operators which are used to fully exploit and possibly improve the structure of the
best strings in the current population, the purpose of the mutation operator is to diversify the search and
introduce new strings into the population in order to fully explore the search space. The creation of these
new strings is usually required because of the vast difference between the number of strings in the
population, M, and the total number of strings in the entire search space, 2^n. Typically, M is chosen to be
orders of magnitude smaller than 2^n, so by selecting and recombining (crossing) the M strings in a given
population, only a fraction of the total search space is explored. So mutation forces diversity in the
population and allows more of the search space to be sampled, thus allowing the search to overcome local
minima.

Mutation, though, cuts with a double-edged sword. Applying mutation too frequently will result in
destroying the highly fit strings in the population, which may slow and impede convergence to a solution.
Hence, although necessary, mutation is usually applied with a small probability. Empirically, it has been
found (De Jong, 1975; Grefenstette, 1986; Schaffer et al., 1989) that reasonable values for the probability of
mutation are 0.001 ≤ Pm ≤ 0.01. Bäck (1993) presented a theoretical analysis where he showed that Pm = 1/n
is the best choice when the fitness function f is unimodal. However, for a multimodal fitness function, Bäck
shows that a dynamic mutation rate may overcome local minima, whereas a fixed mutation rate may not.
The dynamic mutation rate may be implemented by following a schedule where Pm is slowly decreased
towards 1/n from an initial value Pm(0), such that 1 > Pm(0) >> 1/n. This is analogous to decreasing the
temperature in simulated annealing. Davis and Principe (1993) showed that the asymptotic convergence of
a GA with a suitable mutation probability schedule can be faster than that of simulated annealing.

The Fundamental Theorem of Genetic Algorithms

We have just described the mechanisms of the standard genetic algorithm. Later, we will demonstrate by
example that the GA actually works, i.e., global solutions to multimodal functions may be obtained. The
question here is: Why does the standard GA work? Surprisingly, although the literature abounds with
applications which demonstrate the effectiveness of GA search, the underlying theory is far less understood.
The theoretical basis which has been established thus far (Goldberg, 1983; Thierens and Goldberg, 1993),
though, will be described next.

To analyze the convergence properties of the GA, it is useful to define the notion of schema (plural,
(schemata). A schema H is a structured subset of the search space {0, 1}^n. The structure in H is provided by
string similarities at certain fixed positions of all the strings in H. The string positions which aren't fixed in a
given schema are usually denoted by ∗. For example, the schema H = ∗11∗0 is the collection of all 5-bit
binary strings which contain a 1 in the second and third string positions, and a 0 in the last position, that is,

H = {(01100), (01110), (11100), (11110)}

In total, there are 3^n different schemata possible: all combinations of the symbols {0, 1, ∗}. Since there are
only 2^n different strings in the search space, it is clear that a given string will belong to several different
schemata. More precisely, each string will belong to 2^n different schemata.

To prove the fundamental theorem of genetic algorithms, it is necessary to investigate the effect that
selection, crossover, and mutation have on a typical population of strings. More generally, it is possible to
determine the effect of these genetic operators on the schemata of a typical population. The following
notation will be useful: The order of a schema H is the number of fixed positions over the strings in H, and
the defining length of a schema is the distance between the first and last fixed positions of the schema. The
order and defining length are denoted by o(H) and δ(H), respectively. For example, for the schema ∗11∗∗0,
o(H) = 3 because there are 3 fixed positions in H, and δ(H) = 6 − 2 = 4, because 2 and 6 are the
indices of the first and last fixed string positions in H, respectively.

Consider S(t), the population of strings at time t, and consider the collection of schemata which contain one
or more of the strings in this population. For each such schema H, denote by m(H, t) the number of strings
in the population at time t which are also in H. We want to study the long term behavior of m(H, t) for those
schema H which contain highly fit strings.

Using the roulette wheel method of selection outlined in the previous section, the expected number of
strings in H ∩ S(t + 1), given the quantity m(H, t), is easily seen to be:

m(H, t + 1) = m(H, t) f(H) / f̄        (8.5.3)

where f(H) is the average fitness of the strings of H present in the population at time t, and f̄ is the average
fitness of all the strings in the population at time t. Assuming the ratio f(H)/f̄ remains relatively constant,
the above equation has the form of a linear difference equation: x(t + 1) = ax(t). The solution of this equation
is well known and given by x(t) = a^t x(0), t = 0, 1, 2, ..., which explodes if a > 1 and decays if a < 1. By
comparing with Equation (8.5.3), we see that the number of strings in the population represented by H is
expected to grow exponentially if f(H) > f̄, i.e., if the average fitness of the schema is higher than the average
fitness of the entire population. Conversely, the number of strings in the population represented by H will
decay exponentially if f(H) < f̄.

Now consider the effect of crossover on a schema H with m(H, t) samples in the population at time t. If a
string belonging to H is selected for crossover, one of two possibilities may occur: (1) The crossover
preserves the structure of H; in this case, the schema is said to have survived crossover, or (2) The crossover
destroys the structure, and hence the resulting crossed string will not belong to H at time t + 1. It's easy to
see by example which schemata are likely to survive crossover. Consider the two schemata A and B shown
below.

A = 1∗∗∗1∗,    B = ∗01∗∗∗

Claim: Schema A will not survive crossover if the cross site k is 1, 2, 3, or 4. To see this, just take a
representative example from A, say 100011. Making the reasonable assumption that the mating string is not
identical to the example string at precisely the fixed string positions of A (i.e., the first and fifth positions),
then upon crossover with cross site 1, 2, 3, or 4, the fixed 1 at the fifth string position will be lost, and the
resulting string will not belong to A.

On the other hand, schema B may be crossed at sites 1, 3, 4, and 5 and still preserve the structure of B,
because, in this case, the 01 fixed positions will lie on the same side of the crossing site and will be copied
into the resulting string. The only crossing site which will destroy the structure of schema B would be k = 2.

By noticing the difference in defining length for these two schemata, δ(A) = 4 and δ(B) = 1, the following
conclusion may be made: A schema survives crossover when the cross site is chosen outside its defining
length. Hence, the probability that a schema H will survive crossover is given by 1 − δ(H)/(n − 1). But since
the crossover operator is only applied with probability Pc, the following quantity is a lower bound for the
crossover survival probability: 1 − Pc δ(H)/(n − 1).

Finally, the mutation operator destroys a schema only if it is applied to the fixed positions in the schema.
Hence the probability that a schema H will survive mutation is given by (1 − Pm)^o(H). For small values of
Pm, the binomial theorem may be employed to obtain the approximation (1 − Pm)^o(H) ≈ 1 − o(H)Pm. The
Fundamental Theorem of Genetic Algorithms may now be given.

Theorem. (Goldberg, 1983) By using the selection, crossover, and mutation of the standard genetic
algorithm, then short, low order, and above average schemata receive exponentially increasing trials in
subsequent populations.

Proof. Since the operations of selection, crossover, and mutation are applied independently, the
probability that a schema H will survive to the next generation may be obtained by a simple multiplication
of the survival probabilities derived above:

m(H, t + 1) ≥ m(H, t) [ f(H) / f̄ ] [ 1 − Pc δ(H)/(n − 1) ] [ 1 − o(H) Pm ]        (8.5.4)

By neglecting the cross-product terms, the desired result is obtained:

m(H, t + 1) ≥ m(H, t) [ f(H) / f̄ ] [ 1 − Pc δ(H)/(n − 1) − o(H) Pm ]        (8.5.5)

The short, low order, and above average schemata are called building blocks, and the Fundamental Theorem
indicates that building blocks are expected to dominate the population. Is this good or bad in terms of the
original goal of function optimization? The above theorem does not answer this question. Rather, the
connection between the Fundamental Theorem and the observed optimizing properties of the genetic
algorithm is provided by the following conjecture:

The Building Block Hypothesis. The globally optimal strings in may be partitioned into substrings which
are given by the bits of the fixed positions of building blocks.

Stated another way, the hypothesis is that the partial information contained in each of the building blocks
may be combined to obtain globally optimal strings. If this hypothesis is correct, then the Fundamental
Theorem implies that the GA is doing the right thing in allocating an exponentially increasing number of
trials to the building blocks, because some arrangement of the building blocks is likely to produce a
globally optimal string.

Unfortunately, although the building block hypothesis seems reasonable enough, it does not always hold
true. Cases where the hypothesis fails can be constructed. It is believed (Goldberg, 1983), though, that such
cases are of "needle in the haystack" type, where the globally optimal strings are surrounded (in a Hamming
distance sense) by the worst strings in . Such problems are called GA-deceptive because by following the
building block hypothesis, the GA is lead away from the globally optimal solutions rather than towards
them. Current trends in GA research (Kuo and Hwang, 1993; Qi and Palmieri, 1993) include modifying the
standard genetic operators in order to enable the GA to solve such "needle in the haystack" problems, and
hence shrink in size the class of GA-deceptive problems.

The above analysis is based entirely on the schema in the population rather than the actual strings in the
population. The GA, though, processes strings--not schema. This type of duality is called implicit
parallelism by Holland (1975). The implicit parallelism notion means that a larger amount of information is
obtained and processed at each generation by the GA than would appear by simply looking at the
processing of the M strings. This additional information comes from the number of schema that the GA is
processing per generation. The next question is: how many schema are actually processed per generation by
the GA? Clearly, in every population of M strings, there are between 2^n and 2^n·M schemata present (if all the
strings in the population are the same, then there are 2^n schemata; if all the strings are different, there may be
at most 2^n·M schemata). Because the selection, crossover, and mutation operations tend to favor certain
schemata, not all of the schemata in the population will be processed by the GA. Holland (1975) estimated
that O(M^3) schemata per generation are actually processed in a useful manner (see also Goldberg, 1989).
Hence, implicit parallelism implies that by processing M strings, the GA actually processes O(M^3) schemata
for free!

To apply the standard genetic algorithm to an arbitrary optimization problem of the form

Minimize y(x)        (8.5.6)

it is necessary to establish the following:

1. A correspondence between the original search space and some space of binary strings. That is, an
invertible decoding mapping D from the binary strings onto the original search space.

2. An appropriate fitness function f(s), such that the maximizers of f correspond to the minimizers of y.

This situation is shown schematically in Figure 8.5.3.

Figure 8.5.3. A schematic representation of the process of matching an optimization problem with the
genetic algorithm framework.

Example 8.5.2: As an example of the solution process, consider the function shown in Figure 8.1.1, and the
following optimization problem:

Minimize y(x) subject to x ∈ [0.05, 0.5]        (8.5.7)

This is the same function considered earlier in Example 8.1.1 and plotted in Figure 8.1.1. This function has
two local minima near x ≈ 0.058 and x ≈ 0.091, as well as a global minimum at x* ≈ 0.223. The standard
genetic algorithm will be used to obtain the global minimum of this function.

The first thing to notice is that the search space here is real-valued: the interval [0.05, 0.5]. As mentioned
above, some transformation is needed in order to encode/decode the real-valued search space into some
space of binary strings. In this example, a binary search space consisting of six-bit strings, i.e., {0, 1}^6, was
used, with the decoding transformation given by

D(s) = 0.05 + [(0.5 − 0.05) / (2^6 − 1)] d(s)        (8.5.8)

where d(s) is the ordinary decimal representation of the 6-bit binary string s. For example, the decimal
representation of 000011 is 3, so D(000011) = 0.071; as would be expected, the two end-points are mapped
in the following way: D(000000) = 0.05 and D(111111) = 0.5.
To establish an appropriate fitness function for this problem, recall that the problem is to minimize y(x) but
maximize the fitness function f(s). So some sort of inverting transformation is required here. In this
example, the following fitness function is used:

f(s) = 1 − y(z)        (8.5.9)

where z = D(s).
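In code, the decoding and fitness evaluation of this example reduce to a few lines; here y stands for the
objective function of Example 8.1.1, which is not reproduced in this section.

def decode(s):
    # D(s) = 0.05 + (0.5 - 0.05) d(s) / (2**6 - 1), Equation (8.5.8); s is a list of 6 bits
    d = int("".join(str(bit) for bit in s), 2)
    return 0.05 + (0.5 - 0.05) * d / (2 ** 6 - 1)

def fitness(s, y):
    # f(s) = 1 - y(z) with z = D(s), Equation (8.5.9)
    return 1.0 - y(decode(s))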

Before applying the standard genetic algorithm, values for M, Pc, and Pm must be chosen. The values for
these parameters are usually determined empirically by running some trial simulations. However, one may
first try to choose Pc and Pm in the ranges 0.6 ≤ Pc ≤ 0.99 and 0.001 ≤ Pm ≤ 0.01, respectively, as mentioned
earlier. As for the value of M, empirical results suggest that n ≤ M ≤ 2n is a good choice (Alander, 1992. See
also Reeves, 1993). Besides the above parameters, some stopping or convergence criterion is required.
Although there are several different convergence criteria which may be used, the criterion used here is to
stop when the population is sufficiently dominated by a single string. In this case, convergence is obtained
when a single string comprises 80 percent or more of the population.

Two simulations will be described below. In the first simulation, an initial population of strings was
generated uniformly over the search space, and then, as usual, the genetic operators of selection, crossover,
and mutation were applied until the convergence criterion was met. The following parameters were used: M
= 10, Pc = 0.8, and Pm = 0.01, and the results of this simulation are shown in Table 8.5.1. In this table, a
listing of the population is shown for the generations at times t = 0, 1, 2, and 3. The decoded real-value for
each string is also shown, as well as the associated fitness values. The number in parenthesis beside each
string shows the number of multiplicities of the string appearing in the total population of 10 strings. Notice
how the population converges to populations dominated by highly fit strings. After the fourth iteration (t =
4), the population is dominated by the string s* = 011001, and the population has converged. The string s* is
decoded to the value x* = 0.23, which is close to the globally optimal solution of x* 0.223. Note that better
accuracy may be obtained by using a more accurate encoding of the real-valued search space. That is, by
using a GA search space with strings of higher dimension, for example, = {0,1}10, or = {0, 1}20, whatever
accuracy is required.

Population S(t) = {s1, ..., s10}    Decoded Value x = D(s)    Fitness 1 − y(x)

t = 0:
000010 (1)    0.064    0.994
000110 (1)    0.092    1.091
001010 (1)    0.120    0.892
010001 (1)    0.170    1.063
011001 (1)    0.226    1.217
100000 (1)    0.275    1.131
100111 (1)    0.324    0.981
101110 (1)    0.373    0.833
110101 (1)    0.423    0.704
111100 (1)    0.472    0.597

t = 1:
010101 (1)    0.198    1.185
110101 (1)    0.423    0.704
101010 (1)    0.345    0.916
001010 (1)    0.120    0.892
011001 (3)    0.226    1.217
010001 (3)    0.170    1.064

t = 2:
101001 (1)    0.338    0.938
011001 (4)    0.226    1.217
111001 (1)    0.451    0.640
010001 (3)    0.170    1.063
110101 (1)    0.423    0.704

t = 3:
010001 (4)    0.170    1.063
011001 (6)    0.226    1.217

Table 8.5.1. A listing of the first four populations in the first simulation for Example
8.5.2. The numbers in parentheses show the multiplicity of each string in the total
population of 10 strings.

The second simulation of this problem demonstrates a case where mutation is necessary
to obtain the global solution. In this simulation, the initial population was created with all
strings near the left endpoint x = 0.05. The following parameters were used here: M = 10,
Pc = 0.9, and Pm = 0.05. The increased mutation and crossover rates were used to encourage diversity in the
population. This helps the genetic algorithm branch out to explore the entire space. In fact, if mutation was
not used, i.e., Pm = 0, then the global solution could never be found by the GA. This is because the initial
population is dominated by the schema 0000∗∗, which is not a desirable building block because the fitness
of this schema is relatively small. Applying selection and crossover won't help because no new schemata
would be generated. The results of the simulation are shown in Table 8.5.2. This time, the GA took 44
iterations to converge to the solution s* = 011000, with corresponding real-value: x* = 0.219.

Population S(t) = {s1, ..., s10}    Decoded Value x = D(s)    Fitness 1 − y(x)

t = 0:
000000 (3)    0.050    0.954
000001 (1)    0.057    1.055
000010 (3)    0.064    0.994
000011 (3)    0.071    0.929

t = 5:
000010 (2)    0.064    0.994
000110 (1)    0.092    1.091
011010 (2)    0.233    1.213
001011 (1)    0.127    0.873
100010 (1)    0.289    1.090
010010 (1)    0.177    1.103
001000 (1)    0.106    0.999
001110 (1)    0.148    0.935

t = 30:
111000 (1)    0.444    0.656
010010 (4)    0.177    1.103
000010 (1)    0.064    0.994
011010 (2)    0.233    1.213
010000 (2)    0.163    1.021

t = 44:
010100 (1)    0.191    1.164
011000 (9)    0.219    1.217

Table 8.5.2. A listing of the population at various stages of the computation for the
second simulation of Example 8.5.2. In this simulation, the initial population of strings
was concentrated at the left end-point of the search space.
Although the previous example used a one-dimensional objective function, multi-dimensional objective
functions y: R^n → R may also be mapped onto the genetic algorithm framework by simply extending the
length of the binary strings to represent each component of the points x = (x1, ..., xn) in the search space.
That is, each string s will consist of n substrings s = (s1, ..., sn), where si is the binary encoding for the ith
component of x. A decoding transformation may then be applied to each substring separately: D(s) =
(D1(s1), ..., Dn(sn)). Although not necessary, decoding each component separately might be desirable in
certain applications. For example, suppose x = (x1, x2) and the search space is [0, 1] × [0, 100]. To obtain
the same level of accuracy for the two variables, more bits would have to be allotted to the substring
representing the second component of x, because it has a much larger range of values than the first
component. Hence, in this case, the decoding transformations D1(s1) and D2(s2) would be different.

The crossover operator may also be slightly modified to exploit the structure of the substring decomposition
of s. Instead of choosing a single crossing site over the entire string s = (s1, s2, s3, s4), a separate crossing
site may be chosen for each of the substrings, with crossover occurring locally at each substring. This type
of crossover is known as multiple-point crossover.

A large number of other variations of and modifications to the standard GA have been reported in the
literature. For examples, the reader is referred to Chapter 5 in Goldberg (1989) and to the proceedings of the
International Conference on Genetic Algorithms (1989-1993).

The general-purpose nature of GAs allows them to be used in many different optimization tasks. As was
discussed earlier, an arbitrary optimization problem with objective function y(x) can be mapped onto a GA
as long as one can find an appropriate fitness function which is consistent with the optimization task. In
addition, one needs to establish a correspondence (an invertible mapping) between the original search space
in x and the GA search space, which is typically a space of binary strings. Both of these requirements are
possible to satisfy in many optimization problems. For example, one may simply use the function y itself as
the fitness function if y(x) is to be maximized, or the fitness function f = −y may be used if y(x) is to be
minimized. On the other hand, the mapping between the original search space and the GA space can vary
from a simple real-to-binary encoding to more elaborate encoding schemes. Empirical evidence suggests
that different choices/combinations of fitness functions and encoding schemes can have a significant effect
on the GA's convergence time and solution quality (Bäck, 1993). Unfortunately, theoretical results on the
specification of the space to be explored by a GA are lacking (De Jong and Spears, 1993).

8.5.2 Application of Genetic Algorithms to Neural Networks

There are various ways of using GA-based optimization in neural networks. The most obvious way is to use
a GA to search the weight space of a neural network with a predefined architecture (Caudell and Dolan,
1989; Miller et al., 1989; Montana and Davis, 1989; Whitley and Hanson, 1989). The use of GA-based
learning methods may be justified for learning tasks which require neural nets with hidden units (e.g.,
nonlinearly separable classification tasks, nonlinear function approximation, etc.), since the GA is capable
of global search and is not easily fooled by local minima. Also, GAs are useful in nets consisting of
units with non-differentiable activation functions (e.g., LTGs), since the fitness function need not be
differentiable.

In supervised learning one may readily identify a fitness function as −E where E = E(w) may be the sum of
squared error criterion as in Equation (5.1.13) or the entropy criterion of Equation (5.2.16). As for
specifying the search space for the GA, the complete set of network weights is coded as a binary string si
with associated fitness f(si) = −E(D(si)). Here, D(si) is a decoding transformation.
An example of a simple two-input two-unit feedforward net is shown in Figure 8.5.4. In this example, each
weight is coded as a 3-bit signed-binary substring where the leftmost bit encodes the sign of the weight
(e.g., 110 represents −2 and 011 represents +3).

Figure 8.5.4. A simple two layer feed-forward net used to illustrate weight coding in a GA.

Now, we may generate an appropriate GA-compatible representation for the net in Figure 8.5.4 as a
contiguous sequence of substrings s = (101010001110011) which corresponds to the real-valued weight
string (w11, w12, w13, w21, w22) = (−1, 2, 1, −2, 3). Starting with a random population of such strings
(population of random nets), successive generations are constructed using the GA to evolve new strings out
of old ones. Thus, strings whose fitness is above average (more specifically, strings which meet the
criteria of the Fundamental Theorem of GAs) tend to survive, and ultimately, the population converges to
the "fittest" string. This string represents, with a high probability, the optimal weight configuration for the
learning task at hand for the predefined net architecture and predefined admissible weight values. It is
interesting to note here how the cross-over operation may be interpreted as a swapping mechanism where
parts (individual units, groups of units, and/or sets of weights) of fit networks are interchanged in the hope of
producing a network with even higher fitness.
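
To make the decoding transformation D(si) of this example explicit, a small Matlab sketch (the loop below is illustrative only and not part of the text) recovers the five weights from the 15-bit string:

% Decode the example GA string into the five weights of Figure 8.5.4,
% using the 3-bit signed-binary code above (leftmost bit = sign).
s = '101010001110011';                 % example GA string from the text
w = zeros(1, 5);
for i = 1:5
    sub = s(3*i-2 : 3*i);              % i-th 3-bit substring
    sgn = 1 - 2*(sub(1) == '1');       % '1' -> negative, '0' -> positive
    w(i) = sgn*bin2dec(sub(2:3));      % magnitude from the remaining two bits
end
disp(w)                                % prints  -1   2   1  -2   3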

GA's are also able to deal with learning in generally interconnected networks including recurrent nets.
Recurrent networks pose special problems for gradient descent learning techniques (refer to Section 5.4)
that are not shared by GA's. With gradient descent learning it is generally necessary to correlate causes and
effects in the network so that units and weights that cause the desired output are strengthened. But with
recurrent networks the cause of a state may have occurred arbitrarily far in the past. On the other hand, the
GA evolves weights based on a fitness measure of the whole network (a global performance measure), and
the question of what caused any particular network state to occur matters only insofar as the resulting
state is desirable (Wieland, 1991). This inherent strength of GA's is in some ways also a weakness. By
ignoring gradient (or, more generally, cause-and-effect) information when it does exist, GA's can become
slow and inefficient. There are also large costs in speed and storage for working with a whole population of
networks, which can make standard GA's impractical for evolving optimal designs for large networks.

Thus, when gradient information exists, particularly if it is readily available, one can use such information
to speed up the GA search. This leads to hybrid GA/gradient search where a gradient descent step may be
included as one of the genetic operators (Montana and Davis, 1989). A more general view of the advantages
of the marriage of GA and gradient descent can be seen based on the relationship between evolution and
learning. Belew et al. (1990) have demonstrated the complementary nature of evolution and learning: the
presence of learning facilitates the process of evolution [see also Smith (1987); Hinton and Nowlan (1987);
Nolfi et al. (1990); Keeling and Stork (1991)]. In the context of our discussion, genetic algorithms can be
used to provide a model of evolution, and supervised learning (or some other learning paradigm e.g.,
reinforcement or unsupervised learning) may be used to provide simple but powerful learning mechanisms.
Thus, the presence of learning makes evolution much easier; all evolution has to do is to evolve (find) an
appropriate initial state of a system, from which learning can do the rest (much like teaching a child who
already has an "evolved" potential for learning!). These ideas motivate the use of hybrid learning methods
which employ GA and gradient-based searches. A specific example of such a method is presented in the
next section.

Potential applications of GA's in the context of neural networks include evolving appropriate network
structures and learning parameters (Harp et al., 1989, 1990) which optimize one or more network
performance measures. These measures may include fast response (requires minimizing network size), VLSI
hardware implementation compatibility (requires minimizing connectivity), and real-time learning (requires
optimizing the learning rate). Another interesting application of GA's is to evolve learning mechanisms
(rules) for neural networks. In other words, evolution is recruited to discover a process of learning.
Chalmers (1991) reported an interesting experiment where a GA with proper string representation applied
to a population of single-layer neural nets evolved the LMS learning rule [Equation (3.1.35)] as an optimal
learning rule.

8.6 Genetic Algorithm Assisted Supervised Learning

In the previous section, a method for training multilayer neural nets was described which uses a GA to
search for optimal weight configurations. Here, an alternative learning method is described which performs
global search for finding optimal targets for hidden units based on a hybrid GA/gradient descent search
strategy. This method is a supervised learning method which is suited for arbitrarily interconnected
feedforward neural nets. In the following, this hybrid learning method is described in the context of a
multilayer feedforward net having a single hidden layer.

In Section 2.3, the universal approximation capabilities of single hidden layer feedforward nets were
established for a wide variety of hidden unit activation functions, including the threshold activation
function. This implies that an arbitrary nonlinearly separable mapping can always be decomposed into two
linearly separable mappings which are realized as the cascade of two single-layer neural nets, as long as the
first layer (hidden layer) has a sufficient number of nonlinear units (e.g., sigmoids or LTG's).

In supervised learning, a desired target vector (pattern) is specified for each input vector in a given training
set. A linearly separable set of training input/target pairs can be efficiently learned in a single layer net
using the gradient-based LMS or delta learning rules [See Equations (3.1.35) and (3.1.53)]. On the other
hand, a general complex training set may not be linearly separable, which necessitates the use of a hidden
layer. Thus, more sophisticated learning rules must be used which are usually far less efficient (slower) than
LMS or delta rules and, as in the case of backprop, may not always lead to satisfactory solutions. Ideally,
one would like to "decouple" the training of a multiple-layer network into the training of two (or more)
single layer networks. This could be done if some method of finding an appropriate set of hidden unit
activations could be found. These hidden unit activations will be called hidden targets because they can be
used as target vectors to train the first layer. This approach would be useful if it could be guaranteed that the
mapping from the input to the hidden targets and that from those hidden targets to the desired output targets
are linearly separable. Now, once those hidden targets are found, efficient learning of the weights can
proceed independently for the hidden and output layers using the delta rule. In fact, backprop may be
thought of as employing a dynamic version of the above method where hidden targets are estimated
according to Equation (5.1.10) for each training pair. However, because of its gradient descent nature,
backprop's estimate of the hidden targets does not guarantee finding an optimal hidden target configuration.
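
As a concrete illustration (not worked out in the text), consider the XOR mapping with inputs (0,0), (0,1), (1,0), (1,1) and targets 0, 1, 1, 0, which is not linearly separable. Assigning two hidden units the hidden targets (0,0), (1,0), (1,0), and (1,1) for the four inputs, respectively, makes the first hidden unit realize the OR function and the second the AND function, both of which are linearly separable; the output target is then 1 exactly when h1 − h2 ≥ 0.5, so the mapping from hidden targets to the desired output is linearly separable as well, and each layer could be trained independently by a single-layer rule.
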
The following is a more efficient method for training multilayer feedforward neural nets, which utilizes the
above hidden target-based supervised learning strategy. Here, a GA is used to "evolve" proper hidden
targets, and a gradient descent search (LMS or delta rule) is used to learn optimal network interconnection
weights (Song, 1992; Hassoun and Song, 1993 a, b).

8.6.1 Hybrid GA/Gradient Descent Method for Feedforward Multilayer Net Training

The basics of the hybrid GA/gradient descent (GA/GD) learning method for a multilayer feedforward net
are described next. The GA/GD method consists of two parts: genetic search in hidden target space and
gradient-based weight update at each layer. Consider the fully interconnected feedforward single hidden
layer net of Figure 8.6.1 with LTG hidden units. The output units can be linear units, sigmoid units, or
LTG's, depending on the nature of the target vector d. Also, consider the training set {xk, dk},
k = 1, 2, ..., m.

Figure 8.6.1. A two-layer fully interconnected feedforward neural network.

Now, if we are given a set of hidden target column vectors {h1, h2, ..., hm}, hj ∈ {0, 1}^J (or {−1, +1}^J), such
that the mappings xk → hk and hk → dk, for k = 1, 2, ..., m, are linearly separable, then gradient descent-based
search (e.g., perceptron rule, LMS, delta-rule, or Ho-Kashyap rule) may be employed to independently and
quickly learn the optimal weights for both hidden and output layers. Initially, though, we do not know the
proper set {hk} of hidden targets which solves the problem. Therefore, a GA will be used to evolve such a
set of hidden targets. In other words, a GA search is used to explore the space of possible hidden targets
{h} (hidden target space) and converge to a global solution which renders the mappings x → h and h → d
linearly separable. Since the hidden targets are binary-valued, a natural coding of a set of hidden targets is
the string s = (s1 s2 ... sm), where si is a string formed from the bits of the vector hi. Equivalently, we may
represent the search "point" as a J × m array (matrix) H = [h1 h2 ... hm]. This representation is particularly
useful in the multipoint crossover described next.

A population of M random binary arrays {Hj}, j = 1, 2, ..., M, is generated as the initial population of search
"points." Each array has an associated network labeled j whose architecture is shown in Figure 8.6.1, with
all M nets initialized with the same set of random weights. The fitness of the jth search point (array Hj) is
determined by the output SSE of network j:

    Ej = Σ (k = 1..m) Σ (l = 1..L) [ dlk − ylk(j) ]²        (8.6.1)

Here, ylk(j) is the output of the lth output unit in network j due to the input xk, and dlk is the corresponding
component of the desired output dk. Now, any one of a number of fitness functions may be used. Examples
are f(Hj) = −Ej, f(Hj) = 1/Ej, or even f(Hj) = 1/(Ej + ε), where ε is a very small positive number. However,
different fitness functions may lead to different performance.

Initially, starting from random weight values and random hidden targets, LMS is used to adapt the weights
of the hidden layer in each of the M networks subject to the training set {xk, hk}, k = 1, 2, ..., m. Here, the
threshold activation is removed during weight adaptation and the hidden units are treated as linear units.
Alternatively, an adaptive version of the Ho-Kashyap algorithm or the perceptron rule may be applied
directly to the hidden LTG's. Similarly, the weights of the output layer units are adapted subject to the
training set {hk, dk}, independent of the first hidden layer.
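
A minimal Matlab sketch of this decoupled adaptation for a single network is given below; the dimensions, the placeholder data, the bias inputs, the learning rate, and the number of passes are all assumptions made for illustration and are not taken from the text.

% Decoupled LMS training of the two layers of Figure 8.6.1 for one network,
% given a candidate hidden-target matrix H (sizes and data are placeholders).
eta = 0.1;  passes = 10;                       % assumed learning rate and number of passes
n = 4;  J = 4;  L = 1;  m = 16;                % assumed dimensions
X = [round(rand(n, m)); ones(1, m)];           % placeholder inputs, plus an assumed bias row
H = round(rand(J, m));                         % candidate hidden targets in {0,1}
D = round(rand(L, m));                         % placeholder desired outputs
W1 = 0.1*randn(J, n+1);                        % hidden-layer weights
W2 = 0.1*randn(L, J+1);                        % output-layer weights
for p = 1:passes
    for k = 1:m
        % hidden layer trained on {x_k, h_k}; the LTG's are treated as linear units (LMS)
        W1 = W1 + eta*(H(:,k) - W1*X(:,k))*X(:,k)';
        % output layer trained on {h_k, d_k}, independently of the hidden layer
        hk = [H(:,k); 1];
        W2 = W2 + eta*(D(:,k) - W2*hk)*hk';
    end
end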

After the weights are updated, each network is tested by performing feedforward computations and its
fitness is computed. In these computations, the outputs of the hidden units (as opposed to the hidden
targets) serve as the inputs to the output layer. Next, the GA operators of reproduction, cross-over, and
mutation are applied to evolve the next generation of hidden target sets {Hj}. In reproduction, the hidden
target sets Hj with the highest fitness are duplicated and are entered into a temporary pool for cross-over.
Cross-over is applied with a probability Pc (Pc is set close to 1). A pair {Hi, Hj} is selected randomly
without replacement from the temporary pool just generated. If a training pair {xk, dk} is poorly learned by
network i (network j) during the above learning phase (i.e., if the output error due to this pair is
substantially larger than the average output error of network i (network j) on the whole training set) then the
corresponding column hk of Hi is replaced by the kth column of Hj. Here, cross-over can affect multiple
pairs of columns in the hidden target arrays. The above reproduction and cross-over operations differ from
those employed by the standard GA, and are motivated by empirical results (Hassoun and Song, 1993b). On
the other hand, the standard mutation operation is used here, where each bit of the Hi's, after cross-over, is
flipped with a probability Pm (typically, Pm = 0.01 is used).
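
The column-swap cross-over and bit-flip mutation just described might be sketched in Matlab as follows; the arrays, the per-pattern errors, and the factor of 2 used to decide what counts as "substantially larger" than the average error are placeholders, since the text does not fix that threshold.

% Sketch of the modified cross-over (column swap) and standard mutation.
J = 4;  m = 16;  Pm = 0.01;                      % assumed sizes and mutation rate
Hi = round(rand(J, m));  Hj = round(rand(J, m)); % placeholder parent hidden-target arrays
err_i = rand(1, m);                              % placeholder per-pattern output errors of network i
poor = err_i > 2*mean(err_i);                    % "substantially larger than average" (factor 2 assumed)
Hi(:, poor) = Hj(:, poor);                       % replace poorly learned columns of Hi by those of Hj
flip = rand(J, m) < Pm;                          % standard mutation: each bit of Hi is
Hi(flip) = 1 - Hi(flip);                         %   flipped with probability Pm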

The above is a description of a single cycle of the GA/GD learning method. This cycle is repeated until the
population {Hi} converges to a dominant representation or until at least one network is generated which has
an output SSE less than a prespecified value. During GA/GD learning, the weights of all M networks are
reinitialized at the beginning of each cycle to small random values (one set of random weights may be used
for all networks and for all cycles). Hassoun and Song (1993b) reported several variations of the above
method including the use of sigmoidal hidden units, using the outputs of the hidden layer instead of the
hidden targets to serve as the input pattern to the output layer during the training phase, and using different
fitness functions. However, the present GA/GD method showed better overall performance on a range of
benchmark problems.

One of the main motivations behind applying GA to the hidden target space as opposed to the weight space
is the possibility of the existence of a more dense set of solutions for a given problem. That is, there may be
many more optimal hidden target sets {H*} in hidden target space which produce zero SSE than optimal
weight sets {w*} in weight space. This hypothesis was validated on a number of simple problems which
were designed so that the weight space and the hidden target space had the same dimensions. However,
further and more extensive testing is still required in this area.

In the architecture of Figure 8.6.1, the above GA/GD method involves a GA search space of dimension mJ.
On the other hand, the GA search in weight space involves a search space of dimension [(n+1)J + (J+1)L]b
where b is the number of binary bits chosen to encode each weight in the network (see Problem 8.5.6).
Since one would normally choose a population size M proportional to the dimension of the binary search
space in GA applications, we may conclude that the GA/GD method has a speed advantage over the other
method when the following condition is satisfied

mJ < [(n+1)J + (J+1)L]b (8.6.2)

Equation (8.6.2) implies that the GA/GD method is preferable over GA-based weight search in neural
network learning tasks when the size of the training set, m, is small compared to the product of the
dimension of the training patterns and the bit accuracy, nb (here, it is assumed that n >> L). Unfortunately,
many practical problems (such as pattern recognition, system identification, and function approximation
problems) lead to training sets characterized by m >> n, which makes the GA/GD method less
advantageous in term of computational speed. However, one may alleviate this problem (e.g., in pattern
recognition applications) by partial preprocessing of the training set using a fast clustering method which
would substantially reduce the size of the training set (refer to Section 6.1 for details) and thus make the
GA/GD method regain its speed advantage.
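
For a rough sense of the scales involved (the numbers here are chosen purely for illustration), a net with n = 16 inputs, J = 10 hidden units, L = 1 output unit, and b = 8 bits per weight gives a weight-space GA string of [(16 + 1)(10) + (10 + 1)(1)](8) = 1448 bits, so condition (8.6.2) favors the GA/GD method only when the training set contains fewer than about 1448/10 ≈ 145 patterns.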

As is the case with the simulated annealing global search method in weight space, the GA/GD method
may not compete with backprop in computational speed. However, the GA/GD method is an effective
alternative to backprop in learning tasks which involve complex multimodal criterion (error) functions,
when optimal solutions are at a premium.

8.6.2 Simulations

The GA/GD method is tested on the 4-bit parity binary mapping and a continuous mapping that arises from
a nonlinear system identification problem. The 4-bit parity (refer to the K-map in Figure 2.1.2) is chosen
because it is known to pose a difficult problem to neural networks using gradient descent-based learning,
due to multiple local minima. On the other hand, the nonlinear system identification problem is chosen to
test the ability of the GA/GD method with binary hidden targets to approximate continuous nonlinear
functions.

In both simulations, a two-layer feedforward net was used with sigmoidal hidden units employing the
hyperbolic tangent activation. For the binary mapping problem, bipolar training data, bipolar hidden targets,
and bipolar output targets are assumed. Also, a single sigmoidal unit is used for the output layer. On the
other hand, the system identification problem used one linear output unit. The delta rule with a learning rate
of 0.1 is used to learn the weights at both hidden and output layers. Only ten delta rule learning steps are
allowed for each layer per full GA/GD training cycle.

The 4-bit parity is a binary mapping from 4-dimensional binary-valued input vectors to one binary-valued
(desired) output. The desired output is taken as +1 if the number of +1 bits in the input vector is odd, and −1
otherwise. The networks used in the following simulations employ four hidden units. The GA/GD method
is tested with population sizes of 8, 32, and 64 strings. For each population size, fifty trials are performed
(each trial re-randomizes all initial weights and hidden target sets) and learning cycles statistics (mean
value, standard deviation, maximum, and minimum) are computed. Simulation results are reported in Table
8.6.1 for the GA/GD method and three other methods: (1) a method similar to GA/GD but with the GA
process replaced by a process where the search is reinitialized with random hidden targets and random
weights at the onset of every learning cycle. This method is referred to as the random hidden target/gradient
descent (RH/GD) method (it should be noted here that sufficient iterations of the delta rule are allowed for
each cycle in order to rule out non-convergence); (2) incremental backprop (BP); and (3) standard GA
learning in weight space (SGA).
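
For reference, a small Matlab sketch (not from the text) that generates the bipolar 4-bit parity training set used in these simulations:

% Generate the bipolar 4-bit parity training set: 16 input vectors in {-1,+1}^4,
% target +1 when the number of +1 bits is odd and -1 otherwise.
X = zeros(4, 16);  d = zeros(1, 16);
for k = 0:15
    bits = bitget(k, 4:-1:1);                  % 4-bit binary pattern of k
    X(:, k+1) = 2*bits' - 1;                   % map {0,1} -> {-1,+1}
    d(k+1) = 2*mod(sum(bits), 2) - 1;          % +1 for an odd number of ones, -1 otherwise
end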

Learning cycles statistics (mean, standard deviation, maximum, minimum) versus population size

                   Population size 8         Population size 32        Population size 64
Learning method    Mean  Std. dev. Max  Min  Mean  Std. dev. Max  Min  Mean  Std. dev. Max   Min

GA/GD              437   530      1882  14   159   195       871  8    105   335       2401  5

RH/GD              Does not converge within 1 million trials.

BP                 96 out of 100 runs do not converge within 1 million backprop cycles.
                   The remaining 4 runs converged in an average of 3644 cycles.

SGA                Does not converge within 1000 generations with population sizes of 64 and 132.

Table 8.6.1. Simulation results for the 4-bit parity using the GA/gradient descent
(GA/GD) method, the random hidden target/gradient descent (RH/GD) method, backprop
(BP), and the standard GA (SGA) in weight space method. A four hidden unit
feedforward neural net is used.

In all simulations, the GA/GD method led to successful convergence, with population
sizes of 64, 32, and 8, in an average of a few hundred learning cycles or less. The RH/GD
method could not find a single solution within 10^6 trials. This clearly shows the difficulty of the task
and verifies the effectiveness of the GA/GD method. As for backprop, only four out of 100 runs resulted in a
solution, with the remaining 96 runs reaching a high error plateau and/or a local minimum. Finally, the
SGA method neither converged nor was it able to find a solution within 1000 generations, with population
sizes of 64 and 132; this being the case for codings of 8 and 16 binary bits for each weight.

The second simulation involves the identification of a nonlinear plant described by the discrete-time
dynamics of Equation (5.4.4). A feedforward neural network with 20 hidden units is used and is trained
using the GA/GD method on a training set which is generated according to a random input signal in a
similar fashion as described in Section 5.4.1. The size of the training set used here, though, is m = 100.
Figure 8.6.2 shows a typical simulation result after 20 learning cycles with a population of 50 hidden target
sets. The test input signal in this case is the one given in Equation (5.4.5). This result compares favorably to
those in Figure 5.4.4 (a) and (b) for a two-hidden layer feedforward net and a single hidden layer
feedforward net, respectively, which required on the order of 10^5 to 10^6 training iterations of incremental
backprop.

Figure 8.6.2. Nonlinear system identification results (dotted line) employing a single-hidden layer
feedforward neural net trained with the GA/GD method. The exact dynamics are given by the solid line.

We conclude this chapter by pointing out that genetic algorithms may also be used as the evolutionary
mechanism in the context of more general learning systems. Holland (1986; also, Holland and Reitman,
1978) introduced a classifier system which is an adaptive parallel rule-based system that learns syntactically
simple string rules (called classifiers) to guide its performance in an arbitrary environment. The classifier
system develops a sequence(s) of actions or decisions so that a particular objective is achieved in a
dynamically evolving environment. As an example, one may think of the classifier system as a controller
whose objective is to regulate or control the state of a dynamical system. Here, the fitness of a particular
classifier (rule) is a function of how well an individual classifier complements others in the population. The
heart of the classifier system is a reinforcement-type learning mechanism assisted with GA exploration (see
Goldberg (1989) for an accessible reference on classifier systems and their applications). Recently, Abu
Zitar (1993; also, Abu Zitar and Hassoun, 1993a, b) has developed a framework for synthesizing
multilayer feedforward neural net controllers for robust nonlinear control from binary string rules generated
by a classifier-like system.

8.7 Summary

This chapter discusses probabilistic global search methods which are suited for neural network
optimization. Global search methods, as opposed to deterministic gradient-based search methods, must be
used in optimization problems where reaching the global minimum (or maximum) is at a premium.
However, the price one pays for using global search methods is increased computational and/or storage
requirements as compared to that of local search. The intrinsic slowness of global search methods is mainly
due to the slow but crucial exploration mechanisms employed.

Global search methods may be used as optimal learning algorithms for neural networks. Some global search
methods may also be mapped onto recurrent neural networks such that the retrieval dynamics of these
networks escape local minima and evolve toward the global minimum.

Two major probabilistic global search methods are covered in this chapter. The first method is stochastic
simulated annealing, which is motivated by statistical mechanics, and the second is genetic algorithms,
which are motivated by the mechanics of natural selection and natural genetics. The exploration
mechanism in simulated annealing is governed by the Boltzmann-Gibbs probability distribution, and its
convergence is determined by a "cooling" schedule of slowly decreasing "temperature." This method is
especially appealing since it can be naturally implemented by a stochastic neural network known as the
Boltzmann machine. The Boltzmann machine is a stochastic version of Hopfield's energy minimizing net
which is capable of almost guaranteed convergence to the global minimum of an arbitrary bounded
quadratic energy function. Simulated annealing is also applied to optimal weight learning in generally
interconnected multilayer Boltzmann machines, thus extending the applicability of the Boltzmann machine
from combinatorial optimization to optimal supervised learning of complex binary mappings. However,
these desirable features of Boltzmann machines come with slow learning and/or retrieval.

Mean-field annealing is a deterministic approximation (based on mean-field theory) to stochastic simulated
annealing, where the mean behavior of the stochastic state transitions is used to characterize the Boltzmann
machine. This approximation is found to preserve the optimal characteristics of the Boltzmann machine but
with one to two orders of magnitude speed advantage. Mean-field annealing is applied in the context of
retrieval dynamics and weight learning in a Boltzmann machine. It is interesting to see that applying mean-
field theory to a single-layer Boltzmann machine leads to the deterministic continuous Hopfield net.

Genetic algorithms (GA's) are introduced as another method for optimal neural network design. They
employ a parallel multipoint probabilistic search strategy which is biased towards reinforcing search points
of high fitness. The most distinguishing feature of GA's is their flexibility and applicability to a wide range
of optimization problems. In the domain of neural networks, GA's are useful as global search methods for
synthesizing the weights of generally interconnected networks, optimal network architectures and learning
parameters, and optimal learning rules.

It is argued that, in order to make global search methods more speed efficient, local gradient information (if
available) could be used advantageously. In the context of GA optimization, it is possible to think of the GA
as an evolutionary mechanism which could be accelerated by simple learning processes. This observation
motivates the hybrid GA/gradient descent method for feedforward multilayer net training introduced at the
end of the chapter.

Problems
† 8.1.1 Plot and find analytically the global minima of the following functions:

a. y(x) = x sin(1/x); x ∈ [0.05, 0.5]

b. y(x) = (x² + 2x) cos(x); |x| < 5

c. y(x) = 10 sin²(x) + 0.2 (x + 3)²; |x| < 20

d. y(x) = x⁶ − 15x⁴ + 27x² + 250; |x| < 3.7

e. ; |x1| < 1.5 and |x2| < 1.5

f. ; |x1| < 2 and |x2| < 2

† 8.1.2 Plot and count the number of minima, maxima, and saddle points for the following functions:

a. ; |x1| ≤ 1.25 and |x2| ≤ 1.5

b. ; |x1| ≤ 0.3 and |x2| ≤ 0.5

c. ; |x1| ≤ 0.12 and −2 ≤ x2 ≤ 0.8

† 8.1.3 Employ the gradient descent search rule given by

x(t + 1) = x(t) − η (dy/dx)|x = x(t) ;  t = 0, 1, 2, ...

to find a minimum of the function y(x) = x sin(1/x) starting from the initial search point: a. x(0) = 0.05;
b. x(0) = 0.1; c. x(0) = 0.15. Assume η = 10^−4.

† 8.1.4 Repeat Problem 8.1.3 for the function in Problem 8.1.1 (c) with x(0) = −20 and η = 0.01.

†8.1.5 Employ the gradient descent/ascent global search strategy described in Section 8.1 to find the global
minima of the functions (a) - (d) in Problem 8.1.1.

† 8.1.6 "Global descent" is a global search method which was discussed in Section 5.1.2. Implement the
global descent method to search for the global minimum of the function y(x) given in Problem 8.1.1 (c).
Assume , = 0.005, x = 0.01, = 2, and experiment with different values of the repeller parameter . Plot
y[x(t)] versus t for various values of k.

†8.1.7 For the uni-variate case, y = y(x), the global minimum can be reached by gradient descent on the
noisy function

where N(t) is a "noise signal" and c(t) is a parameter which controls the magnitude of noise. Apply the
gradient descent rule (with η = 0.01) of Problem 8.1.3 to this noisy function. Use the resulting "stochastic" gradient rule to find
the global minimum of y(x) in Problem 8.1.1 (c). Assume that N(t) is a normally distributed sequence with a
mean of zero and a variance of one, and that c(t) = 150 exp(−t). Start from x(0) = −20 and experiment with
different values of η in [0.001, 0.01]. Plot x(t) versus t (t = 0, 1, 2, ...). What range of values of η is likely to
lead to the global minimum of y(x)?

†8.2.1 Use the simulated annealing algorithm described in Section 8.2 with the temperature schedule in
Equation (8.2.5) to find the global minima of the functions in Problem 8.1.1 (a) - (c) and Problem 8.1.2 (a)
and (b). Make intelligent choices for the variance of the random perturbation Δx, taking into account the
domain of the function being optimized. Also, make use of the plots of these functions to estimate the
largest possible change in y due to Δx, and use this information to guide your estimate of T0.

*8.3.1 Consider the simple model of a biological neuron with output x given by x(net) = sgn(net), where
net is the post-synaptic potential. Experimental observations (Katz, 1966) show that this post-synaptic
potential is normally distributed; i.e.,

where is the mean potential. The distribution width, , is determined by the parameters of the noise sources
associated with synaptic junctions. Show that the probability that the neuron fires (i.e., the probability of its
output to be equal to one) is given by

where . Note how the pseudo-temperature now has the physical interpretation as being proportional to the
fluctuation of the post-synaptic potential of a real neuron. Next show that the above probability can be
roughly approximated by . Hint: Compare the series expansion of to that of tanh(x).

8.3.2 Derive Equation (8.3.4). (Hint: employ the thermal equilibrium condition of Equation (8.2.3) and
Equation (8.3.1), assume that all units have equal probability of being selected for updating and that only
one unit updates its state at a given time). Show that according to Equation (8.3.4) the probability of a
transition (bit-flip) that increases the energy E is always less than 0.5.

8.3.3 Show that the relative-entropy H in Equation (8.3.9) is positive or zero.

8.3.4 Derive Equation (8.3.12) starting from Equation (8.3.8).

* 8.3.5 Derive Equation (8.3.16) by performing a gradient descent on H in Equation (8.3.15).

8.5.1 Employ Hardy's theorem (see Equation (8.5.2) and associated discussion) in a simple iterative
procedure to find the largest number in the set Q = {1, 3, 4.5, 1.5, 4.2, 2}.

8.5.2 Consider the ten strings in population S(0) in Table 8.5.1 and the two schemata H1 = ∗11∗∗∗ and
H2 = ∗01∗∗0. Which schemata are matched by which strings in the population S(0)? What are the order and
defining length of each of H1 and H2?

8.5.3 Consider the problem of finding the global minimum of the function y(x) = x sin(1/x), x ∈ [0.05, 0.5], using a GA.
Assume the initial population S(0) as in Table 8.5.1 and let Pc = 0.8 and Pm = 0.01 as in the first simulation
of Example 8.5.2. Use Equation (8.5.4) to compute a lower bound for the expected number of schemata of
the form ∗11∗∗∗ in the generation at t = 1. Repeat using the approximation of Equation (8.5.5). Next,
compare these two bounds to the actual number of schemata of the form ∗11∗∗∗ in population S(1) in Table
8.5.1.

8.5.4 Repeat Problem 8.5.3 with the schema ∗01∗∗0.

† 8.5.5 Find the global minimum of the functions in Problem 8.1.1 (a) - (c) and Problem 8.1.2 (a) and (b)
using the standard genetic algorithm of Section 8.5.1. Use binary strings of dimension n = 8. Assume
Pc = 0.85, Pm = 0.02, and a uniformly distributed initial population of 10 strings. Compare your results to
those in Problem 8.2.1.

8.5.6 Consider the two-layer feedforward net shown in Figure 8.6.1. Assume that a binary coding of
weights is used where each weight is represented by a b-bit substring for the purpose of representing the
network as a GA string s of contiguous weight substrings. Show that the dimension of s is equal to
[(n + 1)J + (J + 1)L]b, where n, J, and L are the input vector dimension, number of hidden units, and
number of output units, respectively.
† 8.5.7 Use the standard genetic algorithm to find integer weights in the range [−15, +15] for the neural
network in Figure P8.5.7 such that the network solves the XOR problem. Assume a signed-binary coding of
5 bits (sign bit plus four magnitude bits) for each weight. Also, assume Pc = 0.85, Pm = 0.01, and
experiment with population sizes of M = 10, 20, 30, 40, and 50. The total number of correct responses may
be used as the fitness function.

Figure P8.5.7
Matlab Math Review Modules
These are MS Word files (version 6.0 or higher) and are linked to Matlab (verified for Matlab 4). If
you don't have Matlab, they simply open as Word documents. If you are on a UNIX
machine, these documents can be downloaded and transferred to a PC.
Review of Engineering Mathematical Tools

• Table of Contents
• Calculus
Contents:
I Functions: Graphs, Extreme Points and Roots
II Differentiation
III Integration
IV Series
• Complex Numbers
Contents:
I What are Complex Numbers and why study them?
II Complex Algebra
III Polar Form
IV Exponential Form
V Trigonometric & Hyperbolic functions and Logarithms
• Linear Algebra
Contents:
I Vectors & Matrices
II Matrix Algebra
III Matrix Determinant and Inverse
IV Solution of Linear Systems of Equations
V Simple Programs: Creating and Plotting Arrays
• Differential Equations
Contents:
I Solutions of Ordinary Differential Equations
II Solutions of Simple Nonlinear Differential Equations
III Numerical Solutions
• Taylor Series
• Miscellaneous Commands
These are MS Word files (v.6.0) and are linked to Matlab. If you don't have Matlab, they
simply open as Word documents.

• Shifted and Scaled Activation Functions (Chapter 5)


• Hyperbolic Tangent & Logistic Activation Functions and its derivative (Chapter
5)
• Two-dimensional Dynamical Systems (Chapter 7)

Shifted and Scaled Activation Functions

This Matlab program plots a shifted and scaled hyperbolic tangent activation function and its
derivative. As beta increases, the curves become sharper; alpha is the scaling factor and theta
is the shifting factor.

The hyperbolic tangent activation function is a bipolar function.


clear
close all
beta  = 1;    % slope (gain) parameter
alpha = 2;    % scaling factor
theta = 10;   % shifting factor

net = linspace(theta-5, theta+5, 100);

% shifted and scaled hyperbolic tangent activation
f = alpha*tanh(beta*net - theta);

% derivative with respect to net:
% d/dnet [alpha*tanh(beta*net - theta)] = alpha*beta*(1 - (f/alpha).^2)
fprime = alpha*beta*(1 - (f/alpha).^2);

plot(net,f)
hold on
plot(net,fprime,'b:')
grid on
%axis([-4,4,-1.5,1.5])
title('Hyperbolic tangent activation f and its derivative fprime')
xlabel('net')

Hyperbolic tangent activation f and its derivative fprime
(plot versus net; red: f, blue dotted: fprime)

The following Matlab program plots the logistic activation function and its derivative.
The logistic activation function is a unipolar function, i.e. its value ranges from 0 to a
positive value.

theta2 = 10;   % shifting factor
alpha2 = 1;    % scaling factor
beta2  = 2;    % slope (gain) parameter
net2 = linspace(-10, 10, 200);

% shifted and scaled logistic activation and its derivative with respect to net
f2 = alpha2 ./ (1 + exp(-(beta2*net2 - theta2)));
f2prime = beta2*f2.*(1 - f2/alpha2);

close all

plot(net2,f2)

hold on
plot(net2,f2prime,'b:')
grid on
axis([-10,10,-alpha2,alpha2]);
title('logistic function and its derivative')
xlabel('net');

logistic function and its derivative
(plot versus net; red: f, blue dotted: fprime)

2nd EXAMPLE

This matlab program plots the hyperbolic tangent activation function and
its derivative.

clear
close all
beta =1;

net = linspace(-4,4,100);

% hyperbolic tangent activation and its derivative with respect to net
f = tanh(beta*net);
fprime = beta*(1 - f.^2);

plot(net,f)
hold on
plot(net,fprime,'r:')
grid on
axis([-4,4,-1.5,1.5])
title('Hyperbolic tangent activation f and its derivative fprime')
xlabel('net')

Hyperbolic tangent activation f and its derivative fprime
(plot versus net; fprime shown as a red dotted line)

The following Matlab program plots the logistic activation function and its derivative.

beta =4;
net = linspace(-3,3,100);
% logistic activation and its derivative with respect to net
f2 = 1 ./ (1 + exp(-beta*net));
f2prime = beta*f2.*(1 - f2);

close all

plot(net,f2)

hold on
plot(net,f2prime,'r:')
grid on
axis([-4,4,-1.5,1.5]);
title('logistic function and its derivative')
xlabel('net');
logistic function and its derivative
(plot versus net; fprime shown as a red dotted line)

2 Dimensional Dynamical Systems

Problem 7.5.7, page 415

Given system:

    dx1/dt = x2                          (1)
    dx2/dt = -2*x1 - 3*x2                (2)

Euler's Method:
According to Euler's method, the time derivative is approximated by

    dx/dt ≈ [x(t + del t) - x(t)] / del t          (3)

Substituting (3) in (1) and (2), we can find the values of x1(t + del t) and x2(t + del t) as functions of x1(t) and x2(t).
To start with, we assume some initial conditions, i.e., values of x1 and x2 at time t = 0.

The following Matlab program evaluates the values of x1 and x2 from the initial state over
3000 iterations. It also plots the graph of x1 vs. x2 for three different initial states:
(1,1), (0.75,1), and (0.5,1).


clear
close all
% three initial states: (x1, x2) = (1,1), (0.5,1), and (0.75,1)
x11(1) = 1;    x21(1) = 1;
x12(1) = 0.5;  x22(1) = 1;
x13(1) = 0.75; x23(1) = 1;
delta = 0.01;   % Euler step size (del t)

% Euler integration of (1) and (2) for 3000 steps
for j = 2:1:3000
x11(j) =x21(j-1)*delta +x11(j-1);
x21(j) =x21(j-1)+delta*(-2*x11(j-1)-3*x21(j-1));
x12(j) =x22(j-1)*delta +x12(j-1);
x22(j) =x22(j-1)+delta*(-2*x12(j-1)-3*x22(j-1));
x13(j) =x23(j-1)*delta +x13(j-1);
x23(j) =x23(j-1)+delta*(-2*x13(j-1)-3*x23(j-1));
end

plot(x11,x21,'r')
hold on
plot(x12,x22,'g')
hold on
plot(x13,x23,'y')
xlabel('x1')
ylabel('x2')

(Plot of x2 versus x1 for the three initial states.)

Red - initial state of (1,1).


Yellow - initial state of (0.75,1)
Green - initial state of (0.5,1)
(b)

Problem 7.5.4, page 415
Given system:

    dx1/dt = x2 - x1*(x1^2 + x2^2)
    dx2/dt = -x1 - x2*(x1^2 + x2^2)

Euler's method:

    x1(t + del t) = x1(t) + del t*[x2 - x1*(x1^2 + x2^2)]
    x2(t + del t) = x2(t) + del t*[-x1 - x2*(x1^2 + x2^2)]

clear
close all
% three initial states: (x1, x2) = (1,1), (0.01,0.01), and (0.5,1)
x11(1) = 1;    x21(1) = 1;
x12(1) = 0.01; x22(1) = 0.01;
x13(1) = 0.5;  x23(1) = 1;
delta = 0.01;   % Euler step size (del t)

for j = 2:1:5000
    x11(j) = x11(j-1) + delta*( x21(j-1) - x11(j-1)*(x11(j-1)^2 + x21(j-1)^2) );
    x21(j) = x21(j-1) + delta*( -x11(j-1) - x21(j-1)*(x11(j-1)^2 + x21(j-1)^2) );
    x12(j) = x12(j-1) + delta*( x22(j-1) - x12(j-1)*(x12(j-1)^2 + x22(j-1)^2) );
    x22(j) = x22(j-1) + delta*( -x12(j-1) - x22(j-1)*(x12(j-1)^2 + x22(j-1)^2) );
    x13(j) = x13(j-1) + delta*( x23(j-1) - x13(j-1)*(x13(j-1)^2 + x23(j-1)^2) );
    x23(j) = x23(j-1) + delta*( -x13(j-1) - x23(j-1)*(x13(j-1)^2 + x23(j-1)^2) );
end

plot(x11,x21,'r')
hold on
plot(x12,x22,'g')
hold on
plot(x13,x23,'b')
xlabel('x1')
ylabel('x2')

(Plot of x2 versus x1 for the three initial states.)

Red - initial state of (1,1)


Blue - initial state of (0.5,1)
Green - initial state of (0.01,0.01)

(c)

Problem 7.5.5, page 415

Given system:

    dx1/dt = x2
    dx2/dt = -x2 - (2 + sin t)*x1


clear
close all
% three initial states: (x1, x2) = (1,1), (0.5,1), and (0.75,1)
x11(1) = 1;    x21(1) = 1;
x12(1) = 0.5;  x22(1) = 1;
x13(1) = 0.75; x23(1) = 1;
delta = 0.001;   % Euler step size (del t)

for j = 2:1:8000
    t = (j-1)*delta;   % current time, so that sin(t) matches the given system
    x11(j) = x11(j-1) + x21(j-1)*delta;
    x21(j) = x21(j-1) - x21(j-1)*delta - delta*(2 + sin(t))*x11(j-1);
    x12(j) = x12(j-1) + x22(j-1)*delta;
    x22(j) = x22(j-1) - x22(j-1)*delta - delta*(2 + sin(t))*x12(j-1);
    x13(j) = x13(j-1) + x23(j-1)*delta;
    x23(j) = x23(j-1) - x23(j-1)*delta - delta*(2 + sin(t))*x13(j-1);
end
plot(x11,x21,'r')
hold on
plot(x12,x22,'g')
hold on
plot(x13,x23,'y')
xlabel('x1')
ylabel('x2')

(Plot of x2 versus x1 for the three initial states.)

Red - initial state is (1,1)


Yellow - initial state is (0.75,1)
Green - initial state is (0.5,1)

(d)

Problem 7.5.6, page 415

Damped pendulum system:

    dx1/dt = x2
    dx2/dt = -(g/l)*sin(x1) - f(x2)

The function f should be continuous and satisfy x2*f(x2) >= 0.

clear
close all
% two initial states: (x1, x2) = (1,1) and (0.5,1)
x11(1) = 1;   x21(1) = 1;
x12(1) = 0.5; x22(1) = 1;
delta = 0.01;   % Euler step size (del t)
g = 9.8;        % gravitational acceleration
l = 1;          % pendulum length
for j = 2:1:6000
    % damping function f(x2) = x2 (continuous and x2*f(x2) >= 0)
    f1 = x21(j-1);
    f2 = x22(j-1);

    x11(j) = x11(j-1) + x21(j-1)*delta;
    x21(j) = x21(j-1) + (-g/l)*sin(x11(j-1))*delta - delta*f1;
    x12(j) = x12(j-1) + x22(j-1)*delta;
    x22(j) = x22(j-1) + (-g/l)*sin(x12(j-1))*delta - delta*f2;
end

plot(x11,x21,'b')
hold on
plot(x12,x22,'m')
xlabel('x1')
ylabel('x2')

(Plot of x2 versus x1 for the two initial states.)

Blue - initial state is (1,1)


Magenta - initial state is (0.5,1)
Java Demonstrations of Neural Net
Concepts
The Computation and Neural Networks Laboratory has developed Java demonstrations of
neural net concepts.

Neural Networks Java Applets


 2 Dimensional Linear Dynamical Systems (CNNL) (Check below for a more
general applet)
 Perceptron Learning Rule (CNNL)
 Principal Component Extraction via Various Hebbian-Type Rules (CNNL)
 Clustering via Simple Competitive Learning (CNNL)
 SOM: Self-Organizing Maps (Supports various methods & data distributions;
authored by H.S. Loos & B. Fritzke)
 Traveling Salesman Problem (TSP) via SOM (authored by O. Schlueter)
 Backprop Trained Multilayer Perceptron for Function Approximation (CNNL)
(Demonstrates generalization effects of early stopping of training)
 Neural Nets for Control: The Ball Balancing Problem (CNNL)
 Backprop Trained Multilayer Perceptron Handwriting Recognizer (by Bob
Mitchell)
 Image Compression Using Backprop (CNNL)
 Generalizations of the Hamming Associative Memory (CNNL)
 Associative Memory Analysis Demo (Applet written and maintained by David
Clark; Algorithms were developed in the CNNL)
 Support Vector Machine (SVM) (by Lucent Technologies)
Here is a note on Dr. Hassoun's PhD dissertation which included polynomial-type
"SVM" ....
 The Extreme Point Machine (EPM)

Other Sites with Relevant Java Demos


 Collection of Applets for Neural Networks and Artificial Life
 Genetic Algorithm (GA) Demo (by Marshall C. Ramsey)
 Trailer Truck Backup Demo: Fuzzy Rule-Based Approach (by
Christopher Britton)

Useful Java Applets for Engineers!


 Mathtools.net: The Technical Solutions Portal
 On-Line Graphic Calculator and Solver
 Fourier Series Construction of Waves
 Solution of a 2-D system of autonomous differential equations

Major Java Applet Collections (with search capabilities)


 Digital Cat's Java Resource Center
 Developer.com Java Resource Center
 JARS.com: Rated Java Applets
... but first take a look at Professor Hassoun's
Course: ECE 512

Neural Network-Related Sites Around the World


World Wide Search for Neural Networks Related Sites
Computation and Neural Systems Related Groups in USA
On-Line Neural Network Courses

Data Repositories
Pattern Recognition Related Archives (NIST, UCSD, etc.)
Face Recognition Data (& Research Groups)
Link to UCI Machine Learning Repository
Data for Benchmarking of Learning Algorithms (maintained by David
Rosen)
Data: Spirals, Parity, NetTalk, Vowel, Sonar etc. (from CMU)
DELVE: Data for Evaluating Learning in Valid Experiments (plus learning
methods/software) (by C. Rasmussen and G. Hinton)
References

Aart, E. and Korst, J. (1989). Simulated Annealing and Boltzmann Machines. Wiley, New York.

Abu-Mostafa, Y. S. (1986a). "Neural Networks for Computing?" in Neural Networks for Computing, J. S.
Denker, Editor, 151, 1-6. American Institute of Physics, New York.

Abu-Mostafa, Y. S. (1986b). "Complexity of Random Problems," in Complexity in Information Theory, Y.
Abu-Mostafa, Editor, 115-131. Springer-Verlag, Berlin.

Abu-Mostafa, Y. S. and Psaltis, D. (1987). "Optical Neural Computers," Scientific American, 256(3), 88-95.

Abu Zitar, R. A. (1993). Machine Learning with Rule Extraction by Genetic Assisted Reinforcement
(REGAR): Application to Nonlinear Control. Ph.D. Dissertation, Department of Electrical and Computer
Engineering, Wayne State University, Detroit, Michigan.

Abu Zitar, R. A. and Hassoun, M. H. (1993a). "Neurocontrollers Trained with Rules Extracted by a Genetic
Assisted Reinforcement Learning System," IEEE Transactions on Neural Networks, to appear in 1994.

Abu Zitar, R. A. and Hassoun, M. H. (1993b). "Regulator Control via Genetic Search Assisted
Reinforcement," in Proceedings of the Fifth International Conference on Genetic Algorithms (Urbana-
Champaign 1993), S. Forrest, Editor, 254-262. Morgan Kaufmann, San Mateo.

Ackley, D. H. and Littman, M. S. (1990). "Generalization and Scaling in Reinforcement Learning," in
Advances in Neural Information Processing II (Denver 1989), D. S. Touretzky, Editor, 550-557. Morgan
Kaufmann, San Mateo.

Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985). "A Learning Algorithm for Boltzmann
Machines," Cognitive Science, 9, 147-169.

Alander, J. T. (1992). "On Optimal Population Size of Genetic Algorithms," Proceedings of CompEuro 92
(The Hague, Netherlands 1992), 65-70. IEEE Computer Society Press, New York.

Albert, A. (1972). Regression and the Moore-Penrose Pseudoinverse. Academic Press, New York, NY.

Albus, J. S. (1971). "A Theory of Cerebellar Functions," Mathematical Biosciences, 10, 25-61.

Albus, J. S. (1975). "A New Approach to Manipulator Control: The Cerebellar Model Articulation
Controller (CMAC)," Journal of Dynamic Systems Measurement and Control, Transactions of the ASME,
97, 220-227.

Albus, J. S. (1979). "A Model of the Brain for Robot Control, Part 2: A Neurological Model," BYTE, 54-95.

Albus, J. S. (1981). Brains, Behavior, and Robotics. BYTE/McGraw-Hill, Peterborough.

Alkon, D. L., Blackwell, K. T., Vogl, T. P., and Werness, S. A. (1993). "Biological Plausibility of Artificial
Neural Networks: Learning by Non-Hebbian Synapses," in Associative Neural Memories: Theory and
Implementation, M. H. Hassoun, Editor, 31-49. Oxford University Press, New York.

Almeida, L. B. (1987). "A Learning Rule for Asynchronous Perceptrons with Feedback in a Combinatorial
Environment," in IEEE First International Conference on Neural Networks (San Diego 1987), M. Caudill
and C. Butler, Editors, vol. II, 609-618. IEEE, New York.

Almeida, L. B. (1988). "Backpropagation in Perceptrons with Feedback," in Neural Computers (Neuss
1987), R. Eckmiller and C. von der Malsburg, Editors, 199-208. Springer-Verlag, Berlin.

Alspector, J. and Allen, B. B. (1987). "A Neuromorphic VLSI Learning System," in Advanced Research in
VLSI: Proceedings of the 1987 Stanford Conference, P. Losleben, Editor, 313-349. MIT, Cambridge.

Aluffi-Pentini, F., Parisi, V., and Zirilli, F. (1985). "Global Optimization and Stochastic Differential
Equations," Journal of Optimization Theory and Applications, 47(1), 1-16.

Amari, S.-I. (1967). "Theory of Adaptive Pattern Classifiers," IEEE Trans. Electronic Computers, EC-16,
299-307.

Amari, S.-I. (1968). Geometrical Theory of Information. In Japanese. Kyoritsu-Shuppan, Tokyo.

Amari, S.-I. (1971). "Characteristics of Randomly Connected Threshold-Element Networks and Network
Systems," IEEE Proc., 59(1), 35-47.

Amari, S.-I. (1972a). "Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold
Elements," IEEE Trans. Computers, C-21, 1197-1206.

Amari, S.-I. (1972b). "Characteristics of Random Nets of Analog Neuron-Like Elements," IEEE
Transactions on Systems, Man, and Cybernetics, SMC-2(5), 643-657.

Amari, S.-I. (1974). "A Method of Statistical Neurodynamics," Kybernetik, 14, 201-215.

Amari, S.-I. (1977a). "Neural Theory of Association and Concept-Formation," Biological Cybernetics, 26,
175-185.

Amari, S.-I. (1977b). "Dynamics of Pattern Formation in Lateral-Inhibition Type Neural Fields," Biological
Cybernetics, 27, 77-87.

Amari, S.-I. (1980). "Topographic Organization of Nerve Fields," Bull. of Math. Biology, 42, 339-364.

Amari, S.-I. (1983). "Field Theory of Self-Organizing Neural Nets," IEEE Trans. Syst., Man, Cybernetics,
SMC-13, 741-748.

Amari, S.-I. (1989). "Characteristics of Sparsely Encoded Associative Memory," Neural Networks, 2(6),
451-457.

Amari, S.-I. (1990). "Mathematical Foundations of Neurocomputing," Proceedings of the IEEE, 78(9),
1443-1463.

Amari, S.-I. (1993). "A Universal Theorem on Learning Curves," Neural Networks, 6(2), 161-166.

Amari, S.-I., Fujita, N., and Shinomoto, S. (1992). "Four Types of Learning Curves," Neural Computation,
4(2), 605-618.

Amari, S.-I. and Maginu, K. (1988). "Statistical Neurodynamics of Associative Memory," Neural Networks,
1(1), 63-73.

Amari, S.-I. and Murata, N. (1993). "Statistical Theory of Learning Curves Under Entropic Loss Criterion,"
Neural Computation, 5(1), 140-153.

Amari, S.-I. and Yanai, H.-F. (1993). "Statistical Neurodynamics of Various Types of Associative Nets," in
Associative Neural Memories: Theory and Implementation, M. H. Hassoun, Editor, 169-183. Oxford
University Press, New York.

Amit, D. J. (1989). Modeling Brain Function: The World of Attractor Neural Networks. Cambridge
University Press, Cambridge.
Amit, D. J., Gutfreund, H., and Sompolinsky, H. (1985). "Storing Infinite Numbers of Patterns in a Spin-
Glass Model of Neural Networks," Physical Review Lett., 55(14), 1530-1533.

Amit, D. J., Gutfreund, H., and Sompolinsky, H. (1987). "Statistical Mechanics of Neural Networks Near
Saturation," Ann. Phys. N. Y., 173, 30-67.

Anderberg, M. R. (1973). Cluster Analysis for Applications. Academic Press, NY.

Anderson, J. A. (1972). "A Simple Neural Network Generating Interactive Memory," Mathematical
Biosciences, 14, 197-220.

Anderson, J. A. (1983). "Neural Models for Cognitive Computations," IEEE Transactions on Systems,
Man, and Cybernetics, SMC-13, 799-815.

Anderson, J. A. (1993). "The BSB Model: A Simple Nonlinear Autoassociative Neural Network," in
Associative Neural Memories: Theory and Implementation, M. H. Hassoun, Editor, 77-103. Oxford
University Press, New York.

Anderson, D. Z. and Erie, M. C. (1987). "Resonator Memories and Optical Novelty Filters," Optical
Engineering, 26, 434-444.

Anderson, J. A., Gately, M. T., Penz, P. A., and Collins, D. R. (1990). "Radar Signal Categorization Using
a Neural Network," Proc. IEEE, 78, 1646-1657.

Anderson, J. A. and Murphy, G. L. (1986). "Psychological Concepts in a Parallel System," Physica, 22-D,
318-336.

Anderson, J. A., Silverstien, J. W., Ritz, S. A., and Jones, R. S. (1977). "Distinctive Features, Categorical
Perception, and Probability Learning: Some Applications of a Neural Model," Psychological Review, 84,
413-451.

Angeniol, B., Vaubois, G. and Le Texier, Y.-Y. (1988). "Self-Organizing Feature Maps and the Traveling
Salesman Problem," Neural Networks, 1(4), 289-293.

Apolloni, B. and De Falco, D. (1991). "Learning by Asymmetric Parallel Boltzmann Machines." Neural
Computation, 3(3), 402-408.

Apostol, T. M. (1957). Mathematical Analysis: A Modern Approach to Advanced Calculus. Addison-Wesley,
Reading, MA.

Bachmann, C. M., Cooper, L. N., Dembo, A., and Zeitouni, O. (1987). "A Relaxation Model for Memory
with High Storage Density," Proc. of the National Academy of Sciences, USA, 84, 7529-7531.

Bäck, T. (1993). "Optimal Mutation Rates in Genetic Search," Proceedings of the Fifth International
Conference on Genetic Algorithms (Urbana-Champaign 1993), S. Forrest, Editor, 2-8. Morgan Kaufmann,
San Mateo.

Baird, B. (1990). "Associative Memory in a Simple Model of Oscillating Cortex," in Advances in Neural
Information Processing Systems 2 (Denver 1989) D. S. Touretzky, Editor, 68-75. Morgan Kaufmann, San
Mateo.

Baird, B. and Eeckman, F. (1993). "A Normal Form Projection Algorithm for Associative Memory," in
Associative Neural Memories: Theory and Implementation, M. H. Hassoun, Editor, 135-166. Oxford
University Press, New York.
Baldi, P. (1991). "Computing with Arrays of Bell-Shaped and Sigmoid Functions," in Neural Information
Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Editors, 735-742.
Morgan Kaufmann, San Mateo.

Baldi, P. and Chauvin, Y. (1991). "Temporal Evolution of Generalization during Learning in Linear
Networks," Neural Computation, 3(4), 589-603.

Baldi, P. and Hornik, K. (1989). "Neural Networks and Principal Component Analysis: Learning from
Examples Without Local Minima," Neural Networks, 2(1), 53-58.

Barto, A. G. (1985). "Learning by Statistical Cooperation of Self-Interested Neuron-Like Computing
Elements," Human Neurobiology, 4, 229-256.

Barto, A. G. and Anandan, P. (1985). "Pattern Recognizing Stochastic Learning Automata," IEEE Trans.
on Systems, Man, and Cybernetics, SMC-15, 360-375.

Barto, A. G. and Jordan, M. I. (1987). "Gradient Following without Backpropagation in Layered
Networks," in IEEE First International Conference on Neural Networks (San Diego 1987), M. Caudill and
C. Butler, Editors, vol. II, 629-636. IEEE, New York.

Barto, A. G. and Singh, S. P. (1991). "On The Computational Economics of Reinforcement Learning," in
Connectionist Models: Proceedings of the 1990 Summer School (Pittsburgh 1990), D. S. Touretzky, J. L.
Elman, T. J. Sejnowski, and G. E. Hinton, Editors, 35-44, Morgan Kaufmann, San Mateo.

Barto A. G., Sutton, R. S., and Anderson, C. W. (1983). "Neuronlike Adaptive Elements That Can Solve
Difficult Learning Control Problems," IEEE Trans. System. Man, and Cybernetics, SMC-13(5), 834-846.

Batchelor, B. G. (1969). Learning Machines for Pattern Recognition, Ph.D. thesis, University of
Southampton, Southampton, England.

Batchelor, B. G. (1974). Practical Approach to Pattern Classification. Plenum, New York.

Batchelor, B. G. and Wilkins, B. R. (1968). "Adaptive Discriminant Functions," Pattern Recognition, IEEE
Conf. Publ. 42, 168-178.

Battiti, R. (1992). "First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's
Method," Neural Computation, 4(2), 141-166.

Baum, E. B. (1988). "On the Capabilities of Multilayer Perceptrons," Journal of Complexity, 4, 193-215.

Baum, E. (1989). "A Proposal for More Powerful Learning Algorithms," Neural Computation, 1(2), 201-
207.

Baum, E. and Haussler, D. (1989). "What Size Net Gives Valid Generalization?" Neural Computation, 1(1),
151-160.

Baum, E. B. and Wilczek, F. (1988). "Supervised Learning of Probability Distributions by Neural
Networks," in Neural Information Processing Systems (Denver 1987), D. Z. Anderson, Editor, 52-61,
American Institute of Physics, New York.

Baxt, W. G. (1990). "Use of Artificial Neural Network for Data Analysis in Clinical Decision-Making: The
Diagnosis of Acute Coronary Occlusion," Neural Computation, 2(4), 480-489.

Becker, S. and Le Cun, Y. (1989). "Improving the Convergence of Back-Propagation Learning with Second
Order Methods," in Proceedings of the 1988 Connectionist Models Summer School (Pittsburgh 1988), D.
Touretzky, G. Hinton, and T. Sejnowski, Editors, 29-37. Morgan Kaufmann, San Mateo.
Beckman, F. S. (1964). "The Solution of Linear Equations by the Conjugate Gradient Method," in
Mathematical Methods for Digital Computers, A. Ralston and H. S. Wilf, Editors. Wiley, New York.

Belew, R., McInerney, J., and Schraudolph, N. N. (1990). "Evolving Networks: Using the Genetic
Algorithm with Connectionist Learning," CSE Technical Report CS90-174, University of California, San
Diego.

Benaim, M. (1994). "On Functional Approximation with Normalized Gaussian Units," Neural
Computation, 6(2), 319-333.

Benaim, M. and Tomasini, L. (1992). "Approximating Functions and Predicting Time Series with Multi-
Sigmoidal Basis Functions," in Artificial Neural Networks, J. Aleksander and J. Taylor, Editors, vol. 1, 407-
411. Elsevier Science Publisher B. V., Amsterdam.

Bilbro, G. L., Mann, R., Miller, T. K., Snyder, W. E., van den Bout, D. E., and White, M. (1989).
"Optimization by Mean Field Annealing," in Advances in Neural Information Processing Systems I (Denver
1988), Touretzky, D. S., 91-98. Morgan Kaufmann, San Mateo.

Bilbro, G. L. and Snyder, W. E. (1989). "Range Image Restoration Using Mean Field Annealing," in
Advances in Neural Information Processing Systems I (Denver 1988), Touretzky, D. S., 594-601. Morgan
Kaufmann, San Mateo.

Bilbro, G. L., Snyder, W. E., Garnier, S. J., and Gault, J. W. (1992). "Mean Field Annealing: A Formalism
for Constructing GNC-like Algorithms," IEEE Transactions on Neural Networks, 3(1), 131-138.

Bishop, C. (1991). "Improving the Generalization Properties of Radial Basis Function Neural Networks,"
Neural Computation, 3(4), 579-588.

Bishop, C. (1992). "Exact Calculation of the Hessian Matrix for the Multilayer Perceptron," Neural
Computation, 4(4), 494-501.

Block, H. D. and Levin, S. A. (1970). "On the Boundedness of an Iterative Procedure for Solving a System
of Linear Inequalities," Proc. American Mathematical Society, 26, 229-235.

Blum, A. L. and Rivest, R. (1989). "Training a 3-Node Neural Network is NP-Complete," Proceedings of
the 1988 Workshop on Computational Learning Theory, 9-18, Morgan Kaufmann, San Mateo.

Blum, A. L. and Rivest, R. (1992). "Training a 3-Node Neural Network is NP-Complete," Neural Networks,
5(1), 117-127.

Blumer, A., Ehrenfeucht, A., Haussler, D., and Warmuth, M. (1989). "Learnability and the Vapnik-
Chervonenkis Dimension," JACM, 36(4), 929-965.

Boole, G. (1854). An Investigation of the Laws of Thought. Dover, NY.

Bounds, D. G., Lloyd, P. J., Mathew, B., and Wadell, G. (1988). "A Multilayer Perceptron Network for the
Diagnosis of Low Back Pain," in Proc. IEEE International Conference on Neural Networks (San Diego
1988), vol. II, 481-489

Bourlard, H. and Kamp, Y. (1988). "Auto-Association by Multilayer Perceptrons and Singular Value
Decomposition," Biological Cybernetics, 59, 291-294.

van den Bout, D. E. and Miller, T. K. (1988). "A Traveling Salesman Objective Function that Works," in
IEEE International Conference on Neural Networks (San Diego 1988), vol. II, 299-303. IEEE, New York.
van den Bout, D. E. and Miller, T. K. (1989). "Improving the Performance of the Hopfield-Tank Neural
Network Through Normalization and Annealing," Biological Cybernetics, 62, 129-139.

Bromley, J. and Denker, J. S. (1993). "Improving Rejection Performance on Handwritten Digits by
Training with 'Rubbish'," Neural Computation, 5(3), 367-370.

Broomhead, D. S. and Lowe, D. (1988). "Multivariate Functional Interpolation and Adaptive Networks,"
Complex Systems, 2, 321-355.

Brown, R. R. (1959). "A Generalized Computer Procedure for the Design of Optimum Systems: Parts I and
II," AIEE Transactions, Part I: Communications and Electronics, 78, 285-293.

Brown, R. J. (1964). Adaptive Multiple-Output Threshold Systems and Their Storage Capacities, Ph.D.
Thesis, Tech. Report 6771-1, Stanford Electron. Labs, Stanford University, CA.

Brown, M., Harris, C. J., and Parks, P. (1993). "The Interpolation Capabilities of the Binary CMAC,"
Neural Networks, 6(3), 429-440.

Bryson, A. E. and Denham, W. F. (1962). "A Steepest-Ascent Method for Solving Optimum Programming
Problems," J. Applied Mechanics, 29(2), 247-257.

Bryson, A. E. and Ho, Y.-C. (1969). Applied Optimal Control. Blaisdell, New York.

Burke, L. I. (1991). "Clustering Characterization of Adaptive Resonance," Neural Networks, 4(4), 485-491.

Burshtien, D. (1993). "Nondirect Convergence Analysis of the Hopfield Associative Memory," in Proc.
World Congress on Neural Networks (Portland 1993), vol. II, 224-227. LEA, Hillsdale, NJ.

Butz, A. R. (1967). "Perceptron Type Learning Algorithms in Nonseparable Situations," J. Math Anal. and
Appl., 17, 560-576. Also, see Ph.D. Dissertation, University of Minnesota, 1965.

Cameron, S. H. (1960). "An Estimate of the Complexity Requisite in a Universal Decision Network,"
Wright Air Development Division, Report 60-600, 197-212.

Cannon Jr., R. H. (1967). Dynamics of Physical Systems. McGraw-Hill, New York.

Carnahan, B., Luther, H. A., and Wilkes, J. O. (1969). Applied Numerical Methods. John Wiley and Sons,
New York.

Carpenter, G. A. and Grossberg, S. (1987a). "A Massively Parallel Architecture for a Self-Organizing
Neural Pattern Recognition Machine," Computer Vision, Graphics, and Image Processing, 37, 54-115.

Carpenter, G. A. and Grossberg, S. (1987b). "ART2: Self-Organization of Stable Category Recognition
Codes for Analog Input Patterns," Applied Optics, 26(23), 4919-4930.

Carpenter, G. A. and Grossberg, S. (1990). "ART3: Hierarchical Search Using Chemical Transmitters in
Self-Organizing Pattern Recognition Architectures," Neural Networks, 3(2), 129-152.

Carpenter, G. A., Grossberg, S., and Reynolds, J. H. (1991a). "ARTMAP: Supervised Real-Time Learning
and Classification of Nonstationary Data by a Self-Organizing Neural Network," Neural Networks, 4(5),
565-588.

Carpenter, G. A., Grossberg, S., and Rosen, D. B. (1991b). "ART2-A: An Adaptive Resonance Algorithm
for Rapid Category Learning and Recognition," Neural Networks, 4(4), 493-504.

Casasent, D. and Telfer, B. (1987). "Associative Memory Synthesis, Performance, Storage Capacity, and
Updating: New Heteroassociative Memory Results," Proc. SPIE, Intelligent Robots and Computer Vision,
848, 313-333.

Casdagli, M. (1989). "Nonlinear Prediction of Chaotic Time Series," Physica, 35D, 335-356.

Cater, J. P. (1987). "Successfully Using Peak Learning Rates of 10 (and Greater) in Back-Propagation
Networks with the Heuristic Learning Algorithm," in Proc. IEEE First International Conference on Neural
Networks (San Diego 1987), M. Caudill and C. Butler, Editors, vol. II, 645-651. IEEE, New York.

Cauchy, A. (1847). "Méthode Générale pour la Résolution des Systèmes d'Équations Simultanées,"
Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, Paris, 25, 536-538.

Caudell, T. P. and Dolan, C. P. (1989). "Parametric Connectivity: Training of Constrained Networks Using
Genetic Algorithms," in Proceedings of the Third International Conference on Genetic Algorithms
(Arlington 1989), J. D. Schaffer, Editor, 370-374. Morgan Kaufmann, San Mateo.

Cetin, B. C., Burdick, J. W., and Barhen, J. (1993a). "Global Descent Replaces Gradient Descent to Avoid
Local Minima Problem in Learning With Artificial Neural Networks," in IEEE International Conference on
Neural Networks (San Francisco 1993), vol. II, 836-842. IEEE, New York.

Cetin, B. C., Barhen, J., and Burdick, J. W. (1993b). "Terminal Repeller Unconstrained Subenergy
Tunneling (TRUST) for Fast Global Optimization," Journal of Optimization Theory and Applications, 77,
97-126.

Chalmers, D. J. (1991). "The Evolution of Learning: An Experiment in Genetic Connectionism," in
Connectionist Models: Proceedings of the 1990 Summer School (Pittsburgh 1990), D. S. Touretzky, J. L.
Elman, and G. E. Hinton, Editors, 81-90. Morgan Kaufmann, San Mateo.

Chan, L. W. and Fallside, F. (1987). "An Adaptive Training Algorithm for Backpropagation Networks,"
Computer Speech and Language, 2, September - December, 205-218.

Changeux, J. P. and Danchin, A. (1976). "Selective Stabilization of Developing Synapses as a Mechanism
for the Specification of Neural Networks," Nature (London), 264, 705-712.

Chauvin, Y. (1989). "A Back-Propagation Algorithm with Optimal Use of Hidden Units," in Advances in
Neural Information Processing Systems 1 (Denver 1988), D. S. Touretzky, Editor, 519-526. Morgan
Kaufmann, San Mateo.

Chiang, T.-S., Hwang, C.-R., and Sheu, S.-J. (1987). "Diffusion for Global Optimization in R^n," SIAM J.
Control Optimization, 25(3), 737-752.

Chiueh, T. D. and Goodman, R. M. (1988). "High Capacity Exponential Associative Memory," in Proc.
IEEE Int. Conference on Neural Networks (San Diego 1988), vol. I, 153-160. IEEE Press, New York.

Chiueh, T. D. and Goodman, R. M. (1991). "Recurrent Correlation Associative Memories," IEEE
Transactions on Neural Networks, 2(2), 275-284.

Cichocki, A. and Unbehauen, R. (1993). Neural Networks for Optimization and Signal Processing. John
Wiley & Sons, Chichester, England.

Cohen, M. A. and Grossberg, S. (1983). "Absolute Stability of Global Pattern Formation and Parallel
Memory Storage by Competitive Neural Networks," IEEE Trans. Systems, Man, and Cybernetics, SMC-13,
815-826.

Cohn, D. and Tesauro, G. (1991). "Can Neural Networks do Better than the Vapnik-Chervonenkis
Bounds?" in Neural Information Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and
D. S. Touretzky, Editors, 911-917. Morgan Kaufmann, San Mateo.

Cohn, D. and Tesauro, G. (1992). "How Tight are the Vapnik-Chervonenkis Bounds?" Neural
Computation, 4(2), 249-269.

Cooper, P. W. (1962). "The Hypersphere in Pattern Recognition," Information and Control, 5, 324-346.

Cooper, P. W. (1966). "A Note on Adaptive Hypersphere Decision Boundary," IEEE Transactions on
Electronic Computers (December 1966), 948-949.

Cortes, C. and Hertz, J. A. (1989). "A Network System for Image Segmentation," in International Joint
Conference on Neural Networks (Washington 1989), vol. I, 121-127. IEEE, New York.

Cotter, N. E. and Guillerm, T. J. (1992). "The CMAC and a Theorem of Kolmogorov," Neural Networks,
5(2), 221-228.

Cottrell, G. W. (1991). "Extracting Features from Faces Using Compression Networks: Face, Identity,
Emotion, and Gender Recognition Using Holons," in Connectionist Models: Proceedings of the 1990
Summer School (Pittsburgh 1990), D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, Editors,
328-337. Morgan Kaufmann, San Mateo.

Cottrell, M. and Fort, J. C. (1986). "A Stochastic Model of Retinotopy: A Self-Organizing Process," Biol.
Cybern., 53, 405-411.

Cottrell, G. W. and Munro, P. (1988). "Principal Component Analysis of Images via Backpropagation,"
invited talk, in Proceedings of the Society of Photo-Optical Instrumentation Engineers (Cambridge 1988),
vol. 1001, 1070-1077.

Cottrell, G. W., Munro, P., and Zipser, D. (1987). "Learning Internal Representations from Gray-Scale
Images: An Example of Extensional Programming," in Ninth Annual Conference of the Cognitive Science
Society (Seattle 1987), 462-473. Erlbaum, Hillsdale.

Cottrell, G. W., Munro, P., and Zipser, D. (1989). "Image Compression by Back Propagation: An Example
of Extensional Programming," in Models of Cognition: A Review of Cognitive Science, vol. 1, N. Sharkey,
Editor. Ablex, Norwood.

Cover, T. M. (1964). Geometrical and Statistical Properties of Linear Threshold Devices, Ph.D.
Dissertation, Tech. Report 6107-1, Stanford Electron. Labs, Stanford University, CA.

Cover, T. M. (1965). "Geometrical and Statistical Properties of Systems of Linear Inequalities with
Applications in Pattern Recognition," IEEE Trans. Elec. Comp., EC-14, 326-334.

Cover, T. M. (1968). "Rates of Convergence of Nearest Neighbor Decision Procedures," Proc. First Annual
Hawaii Conference on Systems Theory, 413-415.

Cover, T. M. and Hart, P. E. (1967). "Nearest Neighbor Pattern Classification," IEEE Transactions on
Information Theory, IT-13(1), 21-27.

Crisanti, A. and Sompolinsky, H. (1987). "Dynamics of Spin Systems with Randomly Asymmetric Bonds:
Langevin Dynamics and a Spherical Model," Phys. Rev. A., 36, 4922.

Crowder III, R. S. (1991). "Predicting the Mackey-Glass Time Series with Cascade-Correlation Learning,"
in Connectionist Models: Proceedings of the 1990 Summer School (Pittsburgh 1990), D. S. Touretzky, J. L.
Elman, T. J. Sejnowski, and G. E. Hinton, Editors, 117-123. Morgan Kaufmann, San Mateo.

Cybenko, G. (1989). "Approximation by Superpositions of a Sigmoidal Function," Math. Control Signals
Systems, 2, 303-314.

Darken, C. and Moody, J. (1991). "Note on Learning Rate Schedules for Stochastic Optimization," in
Neural Information Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and D. S.
Touretzky, Editors, 832-838. Morgan Kaufmann, San Mateo.

Darken, C. and Moody, J. (1992). "Towards Faster Stochastic Gradient Search," in Neural Information
Processing Systems 4 (Denver, 1991), J. E. Moody, S. J. Hanson, and R. P. Lippmann, Editors, 1009-1016.
Morgan Kaufmann, San Mateo.

Davis, L. (1987). Genetic Algorithms and Simulated Annealing. Pitman, London, England.

Davis, T. E. and Principe, J. C. (1993). "A Markov Chain Framework for the Simple Genetic Algorithm,"
Evolutionary Computation, 1(3), 269-288.

D'Azzo, J. J. and Houpis, C. H. (1988). Linear Control Systems Analysis and Design (3rd edition).
McGraw-Hill, New York.

De Jong, K. (1975). An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Doctoral Thesis,
Department of Computer and Communications Sciences, University of Michigan, Ann Arbor.

De Jong, K. and Spears, W. (1993). "On the State of Evolutionary Computation," in Proceedings of the
Fifth International Conference on Genetic Algorithms (Urbana-Champaign 1993), S. Forrest, Editor, 618-
623. Morgan Kaufmann, San Mateo.

Dembo, A. and Zeitouni, O. (1988). "High Density Associative Memories," in Neural Information
Processing Systems (Denver 1987), D. Z. Anderson, Editor, 211-212. American Institute of Physics, New
York.

DeMers, D. and Cottrell, G. (1993). "Non-Linear Dimensionality Reduction," in Advances in Neural
Information Processing Systems 5 (Denver 1992), S. J. Hanson, J. D. Cowan, and C. L. Giles, Editors, 550-
587. Morgan Kaufmann, San Mateo.

Dennis Jr., J. E. and Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and
Nonlinear Equations. Prentice-Hall, Englewood Cliffs.

Denoeux, T. and Lengellé, R. (1993). "Initializing Back Propagation Networks with Prototypes," Neural
Networks, 6(3), 351-363.

Derthick, M. (1984). "Variations on the Boltzmann Machine," Technical Report CMU-CS-84-120,
Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Dertouzos, M. L. (1965). Threshold Logic: A Synthesis Approach. MIT Press, Cambridge, MA.

Dickinson, B. W. (1991). Systems: Analysis, Design, and Computation. Prentice-Hall, Englewood Cliffs,
NJ.

Dickmanns, E. D. and Zapp, A. (1987). "Autonomous High Speed Road Vehicle Guidance by Computer
Vision," in Proceedings of the 10th World Congress on Automatic Control (Munich, West Germany 1987),
vol. 4, 221-226.

Drucker, H. and Le Cun, Y. (1992). "Improving Generalization Performance Using Double
Backpropagation," IEEE Transactions on Neural Networks, 3(6), 991-997.

Duda, R. O. and Hart, P. E. (1973). Pattern Classification and Scene Analysis. John Wiley, New York.

Duda, R. O. and Singleton, R. C. (1964). "Training a Threshold Logic Unit with Imperfectly Classified
Patterns," IRE Western Electric Show and Convention Record, Paper 3.2.

Durbin, R. and Willshaw, D. (1987). "An Analogue Approach to The Traveling Salesman Problem Using
an Elastic Net Method," Nature, 326, 689-691.

Efron, D. (1964). "The Perceptron Correction Procedure in Non-Separable Situations," Tech. Report. No.
RADC-TDR-63-533.

Elman, J. L. and Zipser, D. (1988). "Learning the Hidden Structure of Speech," Journal of the Acoustical
Society of America, 83, 1615-1626.

Everitt, B. S. (1980). Cluster Analysis (2nd edition). Heinemann Educational Books, London.

Fahlman, S. E. (1989). "Fast Learning Variations on Back-Propagation: An Empirical Study," in
Proceedings of the 1988 Connectionist Models Summer School (Pittsburgh 1988), D. Touretzky, G. Hinton,
and T. Sejnowski, Editors, 38-51. Morgan Kaufmann, San Mateo.

Fahlman, S. E. and Lebiere, C. (1990). "The Cascade-Correlation Learning Architecture," in Advances in
Neural Information Processing Systems 2 (Denver 1989), D. S. Touretzky, Editor, 524-532. Morgan
Kaufmann, San Mateo.

Fakhr, W. (1993). "Optimal Adaptive Probabilistic Neural Networks for Pattern Classification," Ph.D.
Thesis, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada.

Fang, Y. and Sejnowski, T. J. (1990). "Faster Learning for Dynamic Recurrent Backpropagation," Neural
Computation, 2(3), 270-273.

Farden, D. C. (1981). "Tracking Properties of Adaptive Signal Processing Algorithms," IEEE Trans.
Acoust. Speech Signal Proc., ASSP-29, 439-446.

Farhat, N. H. (1987). "Optoelectronic Analogs of Self-Programming Neural Nets: Architectures and
Methods for Implementing Fast Stochastic Learning by Simulated Annealing," Applied Optics, 26, 5093-
5103.

Feigenbaum, M. (1978). "Quantitative Universality for a Class of Nonlinear Transformations," J. Stat.
Phys., 19, 25-52.

Fels, S. S. and Hinton, G. E. (1993). "Glove-Talk: A Neural Network Interface Between a Data-Glove and a
Speech Synthesizer," IEEE Transactions on Neural Networks, 4(1), 2-8.

Finnoff, W. (1993). "Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm
and Resistance to Local Minima," in Advances in Neural Information Processing Systems 5 (Denver 1992),
S. J. Hanson, J. D. Cowan, and C. L. Giles, Editors, 459-466. Morgan Kaufmann, San Mateo.

Finnoff, W. (1994). "Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm
and Resistance to Local Minima," Neural Computation, 6(2), 285-295.

Finnoff, W., Hergert, F., and Zimmermann, H. G. (1993). "Improving Model Selection by Nonconvergent
Methods," Neural Networks, 6(5), 771-783.

Fisher, M. L. (1981). "The Lagrangian Relaxation Method for Solving Integer Programming Problems,"
Manag. Sci., 27(1), 1-18.

Fix, E. and Hodges Jr., J. L. (1951). "Discriminatory Analysis: Non-parametric Discrimination," Report 4,
Project 21-49-004, USAF School of Aviation Medicine, Randolph Field, Texas.

Franzini, M. A. (1987). "Speech Recognition with Back Propagation," in Proceedings of the Ninth Annual
Conference of the IEEE Engineering in Medicine and Biology Society (Boston 1987), 1702-1703. IEEE,
New York.

Frean, M. (1990). "The Upstart Algorithm: A Method for Constructing and Training Feedforward Neural
Networks," Neural Computation, 2(2), 198-209.

Fritzke, B. (1991). "Let it Grow - Self-Organizing Feature Maps with Problem Dependent Cell Structure,"
in Artificial Neural Networks, Proc. of the 1991 Int. Conf. on Artificial Neural Networks (Espoo 1991), T.
Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Editors, vol. I, 403-408. Elsevier Science Publishers B.
V., Amsterdam.

Funahashi, K.-I. (1989). "On the Approximate Realization of Continuous Mappings by Neural Networks,"
Neural Networks, 2(3), 183-192.

Funahashi, K.-I. (1990). "On the Approximate Realization of Identity Mappings by 3-Layer Neural
Networks (in Japanese)," Trans. IEICE A, J73-A, 139-145.

Funahashi, K.-I. and Nakamura, Y. (1993). "Approximation of Dynamical Systems by Continuous Time
Recurrent Neural Networks," Neural Networks, 6(6), 801-806.

Galland, C. C. and Hinton, G. E. (1991). "Deterministic Boltzmann Learning in Networks with Asymmetric
Connectivity," in Connectionist Models: Proceeding of the 1990 Summer School (Pittsburgh 1990), D. S.
Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, Editors, 3-9. Morgan Kaufmann, San Mateo.

Gallant, S. I. (1993). Neural Network Learning and Expert Systems. MIT Press, Cambridge, MA.

Gallant, S. I. and Smith, D. (1987). "Random Cells: An Idea Whose Time Has Come and Gone ... and
Come Again?" Proceedings of the IEEE International Conference on Neural Networks (San Diego 1987),
vol. II, 671-678.

Gamba, A. (1961). "Optimum Performance of Learning Machines," Proc. IRE, 49, 349-350.

Gantmacher, F. R. (1990). The Theory of Matrices, vol. 1, 2nd edition. Chelsea, New York.

Gardner, E. (1986). "Structure of Metastable States in the Hopfield Model," J. Physics A, 19, L1047-1052.

Garey, M. R. and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-
Completeness. Freeman, New York.

Gelfand, S. B. and Mitter, S. K. (1991). "Recursive Stochastic Algorithms for Global Optimization in R^d,"
SIAM J. Control and Optimization, 29(5), 999-1018.

Geman, S. (1979). "Some Averaging and Stability Results for Random Differential Equations," SIAM J.
Applied Math., 36, 86-105.

Geman, S. (1980). "A Limit Theorem for the Norm of Random Matrices," Ann. Prob., 8, 252-261.

Geman, S. (1982). "Almost Sure Stable Oscillations in a Large System of Randomly Coupled Equations,"
SIAM J. Appl. Math., 42(4), 695-703.

Geman, S. and Geman, D. (1984). "Stochastic Relaxation, Gibbs Distributions, and The Bayesian
Restoration of Images," IEEE Trans. Pattern Analysis and Machine Intelligence, 6, 721-741.

Geman, S. and Hwang, C. R. (1986). "Diffusions for Global Optimization," SIAM J. of Control and
Optimization, 24(5), 1031-1043.

Gerald, C. F. (1978). Applied Numerical Analysis (2nd edition). Addison-Wesley, Reading, MA.

Geszti, T. (1990). Physical Models of Neural Networks. World Scientific, Singapore.

Gill, P. E., Murray, W., and Wright, M. H. (1981). Practical Optimization. Academic Press, New York.

Girosi, F. and Poggio, T. (1989). "Representation Properties of Networks: Kolmogorov's Theorem is
Irrelevant," Neural Computation, 1(4), 465-469.

Glanz, F. H. and Miller, W. T. (1987). "Shape Recognition Using a CMAC Based Learning System," Proc.
SPIE, Robotics and Intelligent Systems, 848, 294-298.

Glanz, F. H. and Miller, W. T. (1989). "Deconvolution and Nonlinear Inverse Filtering Using a Neural
Network," Proceedings of the International Conference on Acoustics and Signal Processing, vol. 4, 2349-
2352.

Glauber, R. J. (1963). "Time Dependent Statistics of the Ising Model," Journal of Mathematical Physics, 4,
294-307.

Goldberg, D. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.

Golden, R. M. (1986). "The 'Brain-State-in-a-Box' Neural Model is a Gradient Descent Algorithm,"
Journal of Mathematical Psychology, 30, 73-80.

Golden, R. M. (1993). "Stability and Optimization Analysis of the Generalized Brain-State-in-a-Box Neural
Network Model," J. Math. Psychol., 37, 282-298.

Goldman, L., Cook, E. F., Brand, D. A., Lee, T. H., Rouan, G. W., Weisberg, M. C., Acampora, D.,
Stasiulewicz, C., Walshon, J., Terranova, G., Gottlieb, L., Kobernick, M., Goldstein-Wayne, B., Copen,
D., Daley, K., Brandt, A. A., Jones, D., Mellors, J., Jakubowski, R. (1988). "A Computer Protocol to
Predict Myocardial Infarction in Emergency Department Patients with Chest Pain," N. Engl. J. Med., 318,
797-803.

Gonzalez, R. C. and Wintz, P. (1987). Digital Image Processing, 2nd edition. Addison-Wesley, Reading,
MA.

Gordon, M. B., Peretto, P., and Berchier, D. (1993). "Learning Algorithms for Perceptrons from Statistical
Physics," J. Physique I, 3, 377-387.

Gorse, D. and Shepherd, A. (1992). "Adding Stochastic Search to Conjugate Gradient Algorithms," in
Proceedings of 3rd International Conf. on Parallel Applications in Statistics and Economics, Prague: Tisk
renfk Zacody.

Gray, R. M. (1984). "Vector Quantization," IEEE ASSP Magazine, 1, 4-29.

Greenberg, H. J. (1988). "Equilibria of the Brain-State-in-a-Box (BSB) Neural Model," Neural Networks,
1(4), 323-324.

Grefenstette, J. J. (1986). "Optimization of Control Parameters for Genetic Algorithms," IEEE Trans. on
Systems, Man, and Cybernetics, SMC-16, 122-128.

Grossberg, S. (1969). "On Learning and Energy-Entropy Dependence in Recurrent and Nonrecurrent
Signed Networks," Journal of Statistical Physics, 1, 319-350.

Grossberg, S. (1976). "Adaptive Pattern Classification and Universal Recoding: I. Parallel Development
and Coding of Neural Feature Detectors," Biological Cybernetics, 23, 121-134.

Grossberg, S. (1976). "Adaptive Pattern Classification and Universal Recoding, II: Feedback, Expectation,
Olfaction, and Illusions," Biological Cybernetics, 23, 187-202.

Hanson, S. J. and Burr, D. J. (1987). "Knowledge Representation in Connectionist Networks," Bellcore
Technical Report.

Hanson, S. J. and Burr, D. J. (1988). "Minkowski-r Back-Propagation: Learning in Connectionist Models
with Non-Euclidean Error Signals," in Neural Information Processing Systems (Denver 1987), D. Z.
Anderson, Editor, 348-357. American Institute of Physics, New York.

Hanson, S. J. and Pratt, L. (1989). "A Comparison of Different Biases for Minimal Network Construction
with Back-Propagation," in Advances in Neural Information Processing Systems 1 (Denver 1988), D. S.
Touretzky, Editor, 177-185. Morgan Kaufmann, San Mateo.

Hardy, G., Littlewood, J., and Polya, G. (1952). Inequalities. Cambridge University Press, Cambridge,
England.

Harp, S. A., Samad, T., and Guha, A. (1989). "Towards the Genetic Synthesis of Neural Networks," in
Proceedings of the Third International Conference on Genetic Algorithms (Arlington 1989), J. D. Schaffer,
Editor, 360-369. Morgan Kaufmann, San Mateo.

Harp, S. A., Samad, T., and Guha, A. (1990). "Designing Application-Specific Neural Networks Using the
Genetic Algorithms," in Advances in Neural Information Processing Systems 2 (Denver 1989), D. S.
Touretzky, Editor, 447-454. Morgan Kaufmann, San Mateo.

Hartigan, J. A. (1975). Clustering Algorithms. John Wiley & Sons, New York.

Hartman, E. J. and Keeler, J. D. (1991a). "Semi-local Units for Prediction," in Proceedings of the
International Joint Conference on Neural Networks (Seattle 1991), vol. II, 561-566. IEEE, New York.

Hartman, E. J. and Keeler, J. D. (1991b). "Predicting the Future: Advantages of Semilocal Units," Neural
Computation, 3(4), 566-578.

Hartman, E. J., Keeler, J. D., and Kowalski, J. M. (1990). "Layered Neural Networks with Gaussian Hidden
Units as Universal Approximators," Neural Computation, 2(2), 210-215.

Hassoun, M. H. (1988). "Two-Level Neural Network for Deterministic Logic Processing," Proc. SPIE,
Optical Computing and Nonlinear Materials, 881, 258-264.

Hassoun, M. H. (1989a). "Adaptive Dynamic Heteroassociative Neural Memories for Pattern
Classification," in Proc. SPIE, Optical Pattern Recognition, H-K Liu, Editor, 1053, 75-83.

Hassoun, M. H. (1989b). "Dynamic Heteroassociative Neural Memories," Neural Networks, 2(4), 275-287.

Hassoun, M. H., Editor (1993). Associative Neural Memories: Theory and Implementation. Oxford
University Press, New York, NY.

Hassoun, M. H. and Clark, D. W. (1988). "An Adaptive Attentive Learning Algorithm for Single-Layer
Neural Networks," Proc. IEEE Annual Conf. Neural Networks, vol. I, 431-440.

Hassoun, M. H. and Song, J. (1992). "Adaptive Ho-Kashyap Rules for Perceptron Training," IEEE
Transactions on Neural Networks, 3(1), 51-61.

Hassoun, M. H. and Song, J. (1993a). "Multilayer Perceptron Learning Via Genetic Search for Hidden
Layer Activations," in Proceedings of the World Congress on Neural Networks (Portland 1993), vol. III,
437-444.

Hassoun, M. H. and Song, J. (1993b). "Hybrid Genetic/Gradient Search for Multilayer Perceptron
Training." Optical Memory and Neural Networks, Special Issue on Architectures, Designs, Algorithms and
Devices for Optical Neural Networks (Part 1), 2(1), 1-15.

Hassoun, M. H. and Spitzer, A. R. (1988). "Neural Network Identification and Extraction of Repetitive
Superimposed Pulses in Noisy 1-D Signals," Neural Networks, 1, Supplement 1: Abstracts of the First
Annual Meeting of the International Neural Networks Society (Boston 1988), 443. Pergamon Press, New
York.

Hassoun, M. H., and Youssef, A. M. (1989). "A High-Performance Recording Algorithm for Hopfield
Model Associative Memories," Optical Engineering, 28(1), 46-54.

Hassoun, M. H., Song, J., Shen, S.-M., and Spitzer, A. R. (1990). "Self-Organizing Autoassociative
Dynamic Multiple-Layer Neural Net for the Decomposition of Repetitive Superimposed Signals,"
Proceedings of the International Joint Conference on Neural Networks (Washington, D. C. 1990), vol. I,
621-626. IEEE, New York.

Hassoun, M. H., Wang, C., and Spitzer, A. R. (1992). "Electromyogram Decomposition via Unsupervised
Dynamic Multi-Layer Neural Network," in Proceedings of the International Joint Conference on Neural
Networks (Baltimore 1992), vol. II, 405-412. IEEE, New York.

Hassoun, M. H., Wang, C., and Spitzer, A. R. (1994a). "NNERVE: Neural Network Extraction of
Repetitive Vectors for Electromyography, Part I: Algorithm," IEEE Transactions on Biomedical
Engineering, to appear.

Hassoun, M. H., Wang, C., and Spitzer, A. R. (1994b). "NNERVE: Neural Network Extraction of
Repetitive Vectors for Electromyography, Part II: Performance Analysis," IEEE Transactions on
Biomedical Engineering, to appear.

Haussler, D., Kearns, M., Opper, M., and Schapire, R. (1992). "Estimating Average-Case Learning Curves
Using Bayesian, Statistical Physics and VC Dimension Methods," in Neural Information Processing
Systems 4 (Denver 1991), J. E. Moody, S. J. Hanson, and R. P. Lippmann, Editors, 855-862. Morgan
Kaufmann, San Mateo.

Hebb, D. (1949). The Organization of Behavior. Wiley, New York.

Hecht-Nielsen, R. (1987). "Kolmogorov's Mapping Neural Network Existence Theorem," in Proc. Int.
Conf. Neural Networks (San Diego 1987), vol. III, 11-14, IEEE Press, New York.

Hecht-Nielsen, R. (1990). Neurocomputing. Addison-Wesley, Reading, MA.

van Hemmen, J. L., Ioffe, L. B., and Vaas, M. (1990). "Increasing the Efficiency of a Neural Network
through Unlearning," Physica, 163A, 368-392.

Hergert, F., Finnoff, W., and Zimmermann, H. G. (1992). "A Comparison of Weight Elimination Methods
for Reducing Complexity in Neural Networks," in Proceedings of the International Joint Conference on
Neural Networks (Baltimore 1992), vol. III, 980-987. IEEE, New York.

Hertz, J., Krogh, A., and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation. Addison-
Wesley, New York.

Heskes, T. M. and Kappen, B. (1991). "Learning Processes in Neural Networks," Physical Review A, 44(4),
2718-2726.

Heskes, T. M. and Kappen, B. (1993a). "Error Potentials for Self-Organization," in IEEE International
Conference on Neural Networks (San Francisco 1993), vol. III, 1219-1223. IEEE, New York.

Heskes, T. M. and Kappen, B. (1993b). "On-Line Learning Processes in Artificial Neural Networks," in
Mathematical Approaches to Neural Networks, J. G. Taylor, Editor, 199-233. Elsevier Science Publishers
B. V., Amsterdam.

Hestenes, M. R. and Stiefel, E. (1952). "Methods of Conjugate Gradients for Solving Linear Systems," J.
Res. Nat. Bur. Standards, 49, 409-436.

Hinton, G. E. (1986). "Learning Distributed Representations of Concepts," in Proceedings of the 8th
Annual Conference of the Cognitive Science Society (Amherst 1986), 1-12. Erlbaum, Hillsdale.

Hinton, G. E. (1987a). "Connectionist Learning Procedures," Technical Report CMU-CS-87-115, Carnegie-
Mellon University, Computer Science Department, Pittsburgh, PA.

Hinton, G. E. (1987b). "Learning Translation Invariant Recognition in a Massively Parallel Network," in
PARLE: Parallel Architectures and Languages Europe, Lecture Notes in Computer Science, G. Goos and J.
Hartmanis, Editors, 1-13. Springer-Verlag, Berlin.

Hinton, G. E. and Nowlan, S. J. (1987). "How Learning can Guide Evolution," Complex Systems, 1, 495-
502.

Hinton, G. E. and Sejnowski, T. J. (1983). "Optimal Perceptual Inference," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (Washington 1983), 448-453. IEEE, New York.

Hinton, G. E. and Sejnowski, T. J. (1986). "Learning and Relearning in Boltzmann Machines," in Parallel
Distributed Processing: Explorations in the Microstructure of Cognition, vol. I, D. E. Rumelhart, J. L.
McClelland, and the PDP Research Group, MIT Press, Cambridge (1986).

Hirsch, M. W. (1989). "Convergent Activation Dynamics in Continuous Time Networks," Neural
Networks, 2(5), 331-349.

Hirsch, M. and Smale, S. (1974). Differential Equations, Dynamical Systems, and Linear Algebra.
Academic Press, New York.

Ho, Y.-C. and Kashyap, R. L. (1965). "An Algorithm for Linear Inequalities and its Applications," IEEE
Trans. Elec. Comp., EC-14, 683-688.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann
Arbor, Michigan. Reprinted as a second edition (1992). MIT Press, Cambridge.

Holland, J. H. (1986). "Escaping Brittleness: The Possibilities of General-Purpose Learning Algorithms
Applied to Parallel Rule Based Systems," in Machine Learning: An Artificial Intelligence Approach, vol. 2,
R. Michalski, J. Carbonell, and T. Mitchell, Editors, 593-623. Morgan Kaufmann, San Mateo.

Holland, J. H. and Reitman, J. S. (1978). "Cognitive Systems Based on Adaptive Algorithms," in Pattern
Directed Inference Systems, D. A. Waterman and F. Hayes-Roth, Editors, 313-329. Academic Press, New
York.

Hopfield, J. J. (1982). "Neural Networks and Physical Systems with Emergent Collective Computational
Abilities," Proc. National Academy Sciences, USA, 79, 2445-2558.

Hopfield, J. J. (1984). "Neurons with Graded Response have Collective Computational Properties like
Those of Two-State Neurons," Proc. National Academy Sciences, USA, 81, 3088-3092.

Hopfield, J. J. (1987). "Learning Algorithms and Probability Distributions in Feed-Forward and Feed-Back
Networks," Proceedings of the National Academy of Sciences, USA, 84, 8429-8433.

Hopfield, J. J. (1990). "The Effectiveness of Analogue 'Neural Network' Hardware," Network: Computation
in Neural Systems, 1(1), 27-40.

Hopfield, J. J. and Tank, D. W. (1985). "Neural Computation of Decisions in Optimization Problems," Biological
Cybernetics, 52, 141-152.

Hopfield, J. J., Feinstein, D. I., and Palmer, R. G. (1983). "'Unlearning' Has a Stabilizing Effect in
Collective Memories," Nature, 304, 158-159.

Hoptroff, R. G. and Hall, T. J. (1989). "Learning by Diffusion for Multilayer Perceptron," Electronics
Letters, 25(8), 531-533.

Hornbeck, R. W. (1975). Numerical Methods. Quantum, New York.

Hornik, K. (1991). "Approximation Capabilities of Multilayer Feedforward Networks," Neural Networks,
4(2), 251-257.

Hornik, K. (1993). "Some New Results on Neural Network Approximation," Neural Networks, 6(8), 1069-
1072.

Hornik, K., Stinchcombe, M., and White, H. (1989). "Multilayer Feedforward Networks are Universal
Approximators," Neural Networks, 2(5), 359-366.

Hornik, K., Stinchcombe, M., and White, H. (1990). "Universal Approximation of an Unknown Mapping
and its Derivatives Using Multilayer Feedforward Networks," Neural Networks, 3(5), 551-560.

Horowitz, L. L. and Senne, K. D. (1981). "Performance Advantage of Complex LMS for Controlling
Narrow-Band Adaptive Arrays," IEEE Trans. Circuits Systems, CAS-28, 562-576.

Hoshino, T., Yonekura, T., Matsumoto, T., and Toriwaki, J. (1990). "Studies of PCA Realized by 3-Layer
Neural Networks Realizing Identity Mapping (in Japanese)," PRU90-54, 7-14.

Hoskins, J. C., Lee, P., and Chakravarthy, S. V. (1993). "Polynomial Modeling Behavior in Radial Basis
Function Networks," in Proc. World Congress on Neural Networks (Portland 1993), vol. IV, 693-699. LEA,
Hillsdale.

Householder, A. S. (1964). The Theory of Matrices in Numerical Analysis. Blaisdell, New York. (Reprinted,
1975, by Dover, New York.)

Hu, S. T. (1965). Threshold Logic. University of California Press, Berkeley, CA.

Huang, W. Y. and Lippmann, R. P. (1988). "Neural Nets and Traditional Classifiers," in Neural
Information Processing Systems (Denver 1987), D. Z. Anderson, Editor. American Institute of Physics,
New York, 387-396.

Huang, Y. and Schultheiss, P. M. (1963). "Block Quantization of Correlated Gaussian Random Variables,"
IEEE Trans. Commun. Syst., CS-11, 289-296.

Huber, P. J. (1981). Robust Statistics. Wiley, New York.

Hudak, M. J. (1992). "RCE Classifiers: Theory and Practice," Cybernetics and Systems: An International
Journal, 23, 483-515.

Hueter, G. J. (1988). "Solution of the Traveling Salesman Problem with an Adaptive Ring," in IEEE
International Conference on Neural Networks (San Diego 1988), vol. I, 85-92. IEEE, New York.

Hui, S. and Żak, S. H. (1992). "Dynamical Analysis of the Brain-State-in-a-Box Neural Models," IEEE
Transactions on Neural Networks, 3, 86-89.

Hui, S., Lillo, W. E., and Żak, S. H. (1993). "Dynamics and Stability Analysis of the Brain-State-in-a-Box
(BSB) Neural Models," in Associative Neural Memories: Theory and Implementation, M. H. Hassoun,
Editor. Oxford Univ. Press, NY.

Hush, D. R., Salas, J. M., and Horne, B. (1991). "Error Surfaces for Multi-layer Perceptrons," in
International Joint Conference on Neural Networks (Seattle 1991), vol. I, 759-764, IEEE, New York.

Irie, B. and Miyake, S. (1988). "Capabilities of Three-Layer Perceptrons," IEEE Int. Conf. Neural
Networks, vol. I, 641-648.

Ito, Y. (1991). "Representation of Functions by Superpositions of Step or Sigmoid Function and Their
Applications to Neural Network Theory," Neural Networks, 4(3), 385-394.

Jacobs, R. A. (1988). "Increased Rates of Convergence Through Learning Rate Adaptation," Neural
Networks, 1(4), 295-307.

Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C. (1989). "Optimization by Simulated
Annealing: An Experimental Evaluation; Part I, Graph Partitioning," Operations Research, 37(6), 865-892.

Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C. (1990a). "Optimization by Simulated
Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning," Operations
Research, 39(3), 378-406.

Johnson, R. A. and Wichern, D. W. (1988). Applied Multivariate Statistical Analysis (2nd edition),
Prentice-Hall, Englewood Cliffs, NJ.

Jones, R. D., Lee, Y. C., Barnes, C. W., Flake, G. W., Lee, K., Lewis, P. S., and Qian, S. (1990). "Function
Approximation and Time Series Prediction with Neural Networks," in Proc. Intl. Joint Conference on
Neural Networks (San Diego 1990), vol. I, 649-666. IEEE Press, New York.

Judd, J. S. (1987). "Learning in Networks is Hard," in IEEE First International Conference on Neural
Networks (San Diego 1987), M. Caudill and C. Butler, Editors, vol. II, 685-692. IEEE, New York.

Judd, J. S. (1990). Neural Network Design and the Complexity of Learning. MIT Press, Cambridge.

Kadirkamanathan, V., Niranjan, M., and Fallside, F. (1991). "Sequential Adaptation of Radial Basis
Function Neural Networks and its Application to Time-Series Prediction," in Advances in Neural
Information Processing Systems 3 (Denver 1990) R. P. Lippmann, J. E. Moody, and D. S. Touretzky,
Editors, 721-727. Morgan Kaufmann, San Mateo.

Kamimura, R. (1993). "Minimum Entropy Method for the Improvement of Selectivity and Interpretability,"
in Proc. World Congress on Neural Networks (Portland 1993), vol. III, 512-519. LEA, Hillsdale.

Kanerva, P. (1988). Sparse Distributed Memory. Bradford/MIT Press, Cambridge, MA.

Kanerva, P. (1993). "Sparse Distributed Memory and Other Models," in Associative Neural Memories:
Theory and Implementation, M. H. Hassoun, Editor, 50-76. Oxford University Press, New York.

Kanter, I. and Sompolinsky, H. (1987). "Associative Recall of Memory Without Errors," Phys. Rev. A., 35,
380-392.

Karhunen, J. (1994). "Optimization Criteria and Nonlinear PCA Neural Networks," IEEE International
Conference on Neural Networks (Orlando 1994), IEEE Press, to appear.

Karhunen, K. (1947). "Über lineare Methoden in der Wahrscheinlichkeitsrechnung," Annales Academiae
Scientiarum Fennicae, A, 37(1), 3-79, (translated by RAND Corp., Santa Monica, CA, Rep. T-131, 1960).

Karmarkar, N. (1984). "A New Polynomial Time Algorithm for Linear Programming," Combinatorica, 4,
373-395.

Karnaugh, M. (1953). "A Map Method for Synthesis of Combinatorial Logic Circuits," Transactions AIEE,
Comm and Electronics, 72, Part I, 593-599.

Kashyap, R. L. (1966). "Synthesis of Switching Functions by Threshold Elements," IEEE Trans. Elec.
Comp., EC-15(4), 619-628.

Kaszerman, P. (1963). "A Nonlinear Summation Threshold Device," IEEE Trans. Elec. Comp., EC-12,
914-915.

Katz, B. (1966). Nerve, Muscle and Synapse. McGraw-Hill, New York.

Keeler, J. and Rumelhart, D. E. (1992). "A Self-Organizing Integrated Segmentation and Recognition
Neural Network," in Advances in Neural Information Processing Systems 4 (Denver 1991), J. E. Moody, S.
J. Hanson, and R. P. Lippmann, Editors, 496-503. Morgan Kaufmann, San Mateo.

Keeler, J. D., Rumelhart, D. E., and Leow, W.-K. (1991). "Integrated Segmentation and Recognition of
Handprinted Numerals," in Advances in Neural Information Processing Systems 3 (Denver 1990), R. P.
Lippmann, J. E. Moody, and D. S. Touretzky, Editors, 557-563. Morgan Kaufmann, San Mateo.

Keesing, R. and Stork, D. G. (1991). "Evolution and Learning in Neural Networks: The Number and
Distribution of Learning Trials Affect the Rate of Evolution," in Advances in Neural Information
Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Editors, 804-810.
Morgan Kaufmann, San Mateo.

Kelly, H. J. (1962). "Methods of Gradients," in Optimization Techniques with Applications to Aerospace
Systems, G. Leitmann, Editor. Academic Press, New York.

Khachian, L. G. (1979). "A Polynomial Algorithm in Linear Programming," Soviet Mathematika Doklady,
20, 191-194.

Kirkpatrick, S. (1984). "Optimization by Simulated Annealing: Quantitative Studies," J. Statist. Physics, 34,
975-986.

Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). "Optimization by Simulated Annealing," Science,
220, 671-680.

Kishimoto, K. and Amari, S. (1979). "Existence and Stability of Local Excitations in Homogeneous Neural
Fields," J. Math. Biology, 7, 303-318.

Knapp, A. G. and Anderson, J. A. (1984). "A Theory of Categorization Based on Distributed Memory
Storage," Journal of Experimental Psychology: Learning, Memory, and Cognition, 9, 610-622.

Kohavi, Z. (1978). Switching and Finite Automata Theory. McGraw-Hill, NY.

Kohonen, T. (1972). "Correlation Matrix Memories," IEEE Trans. Computers, C-21, 353-359.

Kohonen, T. (1974). "An Adaptive Associative Memory Principle," IEEE Trans. Computers, C-23, 444-
445.

Kohonen, T. (1982a). "Self-Organized Formation of Topologically Correct Feature Maps," Biological
Cybernetics, 43, 59-69.

Kohonen, T. (1982b). "Analysis of Simple Self-Organizing Process," Biological Cybernetics, 44, 135-140.

Kohonen, T. (1984). Self-Organization and Associative Memory. Springer-Verlag, Berlin.

Kohonen, T. (1988). "The 'Neural' Phonetic Typewriter," IEEE Computer Magazine, March 1988, 11-22.

Kohonen, T. (1989). Self-Organization and Associative Memory (3rd ed.). Springer-Verlag, Berlin.

Kohonen, T. (1990). "Improved Versions of Learning Vector Quantization," in Proceedings of the
International Joint Conference on Neural Networks (San Diego 1990), vol. I, 545-550. IEEE, New York.

Kohonen, T. (1991). "Self-Organizing Maps: Optimization Approaches," in Artificial Neural Networks, T.
Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Editors, 981-990. North-Holland, Amsterdam.

Kohonen, T. (1993a). "Things You Haven't Heard About the Self-Organizing Map," IEEE International
Conference on Neural Networks (San Francisco 1993), vol. III, 1147-1156. IEEE, New York.

Kohonen, T. (1993b). "Physiological Interpretation of the Self-Organizing Map Algorithm," Neural
Networks, 6(7), 895-905.

Kohonen, T. and Ruohonen, M. (1973). "Representation of Associated Data by Matrix Operators," IEEE
Trans. Computers, C-22, 701-702.

Kohonen, T., Barna, G., and Chrisley, R. (1988). "Statistical Pattern Recognition with Neural Networks:
Benchmarking Studies," in IEEE International Conference on Neural Networks (San Diego 1988), vol. I,
61-68. IEEE, New York.

Kolen, J. F. and Pollack, J. B. (1991). "Back Propagation is Sensitive to Initial Conditions," in Advances in
Neural Information Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and D. S.
Touretzky, Editors, 860-867. Morgan Kaufmann, San Mateo.

Kolmogorov, A. N. (1957). "On the Representation of Continuous Functions of Several Variables by
Superposition of Continuous Functions of One Variable and Addition," Doklady Akademii Nauk USSR,
114, 679-681.

Komlós, J. (1967). "On the Determinant of (0,1) Matrices," Studia Scientiarum Mathematicarum Hungarica,
2, 7-21.

Komlós, J. and Paturi, R. (1988). "Convergence Results in an Associative Memory Model," Neural
Networks, 3(2), 239-250.

Kosko, B. (1987). "Adaptive Bidirectional Associative Memories," Applied Optics, 26, 4947-4960.

Kosko, B. (1988). "Bidirectional Associative Memories," IEEE Trans. Sys. Man Cybern., SMC-18, 49-60.

Kosko, B. (1992). Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine
Intelligence. Prentice-Hall, Englewood.

Kramer, A. H. and Sangiovanni-Vincentelli, A. (1989). "Efficient Parallel Learning Algorithms for Neural
Networks," in Advances in Neural Information Processing Systems 1 (Denver 1988) D. S. Touretzky,
Editor, 40-48. Morgan Kaufmann, San Mateo.

Kramer, M. (1991). "Nonlinear Principal Component Analysis Using Autoassociative Neural Networks,"
AICHE Journal, 37, 233-243.

Krauth, W., Mézard, M., and Nadal, J.-P. (1988). "Basins of Attraction in a Perceptron Like Neural
Network," Complex Systems, 2, 387-408.

Krekelberg, B. and Kok, J. N. (1993). "A Lateral Inhibition Neural Network that Emulates a Winner-Takes-
All Algorithm," in Proc. of the European Symposium on Artificial Neural Networks (Brussels 1993). M.
Verleysen, Editor, 9-14. D facto, Brussels, Belgium.

Krishnan, T. (1966). "On the Threshold Order of Boolean Functions," IEEE Trans. Elec. Comp., EC-15,
369-372.

Krogh, A. and Hertz, J. A. (1992). "A Simple Weight Decay Can Improve Generalization," in Advances in
Neural Information Processing Systems 4 (Denver 1991), J. E. Moody, S. J. Hanson, and R. P. Lippmann,
Editors, 950-957. Morgan Kaufmann, San Mateo.

Kruschke, J. K. and Movellan, J. R. (1991). "Benefits of Gain: Speeded Learning and Minimal Hidden
Layers in Back-Propagation Networks," IEEE Transactions on Systems, Man, and Cybernetics, SMC-21(1),
273-280.

Kuczewski, R. M., Myers, M. H., and Crawford, W. J. (1987). "Exploration of Backward Error Propagation
as a Self-Organizational Structure," IEEE International Conference on Neural Networks (San Diego 1987),
M. Caudill and C. Butler, Editors, vol. II, 89-95. IEEE, New York.

Kufudaki, O. and Horejs, J. (1990). "PAB: Parameters Adapting Backpropagation," Neural Network World,
1, 267-274.

Kühn, R., Bös, S., and van Hemmen, J. L. (1991). "Statistical Mechanics for Networks of Graded Response
Neurons," Phy. Rev. A, 43, 2084-2087.

Kullback, S. (1959). Information Theory and Statistics. Wiley, New York.

Kung, S. Y. (1993). Digital Neural Networks. PTR Prentice-Hall, Englewood Cliffs, New Jersey.

Kuo, T. and Hwang, S. (1993). "A Genetic Algorithm with Disruptive Selection," Proceedings of the Fifth
International Conference on Genetic Algorithms (Urbana-Champaign 1993), S. Forrest, Editor, 65-69.
Morgan Kaufmann, San Mateo.

Kůrková, V. (1992). "Kolmogorov's Theorem and Multilayer Neural Networks," Neural Networks, 5(3),
501-506.

Kushner, H. J. (1977). "Convergence of Recursive Adaptive and Identification Procedures Via Weak
Convergence Theory," IEEE Trans. Automatic Control, AC-22(6), 921-930.

Kushner, H. J. and Clark, D. (1978). Stochastic Approximation Methods for Constrained and
Unconstrained Systems. Springer, New York.

Lane, S. H., Handelman, D. A., and Gelfand, J. J. (1992). "Theory and Development of Higher Order
CMAC Neural Networks," IEEE Control Systems Magazine, April 1992, 23-30.

Lang, K. J. and Witbrock, M. J. (1989). "Learning to Tell Two Spirals Apart," in Proceedings of the 1988
Connectionist Models Summer School (Pittsburgh 1988), D. Touretzky, G. Hinton, and T. Sejnowski,
Editors, 52-59. Morgan Kaufmann, San Mateo.

Lapedes, A. S. and Farber, R. (1987). "Nonlinear Signal Processing Using Neural Networks: Prediction and
System Modeling," Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico.

Lapedes, A. and Farber, R. (1988). "How Neural Networks Work," in Neural Information Processing
Systems (Denver 1987), D. Z. Anderson, Editor, 442-456. American Institute of Physics, New York.

Lapidus, L. E., Shapiro, E., Shapiro, S., and Stillman, R. E. (1961). "Optimization of Process Performance,"
AICHE Journal, 7, 288-294.

Lawler, E. L. and Wood, D. E. (1966). "Branch-and-Bound Methods: A Survey," Operations Research,
14(4), 699-719.

Lay, S.-R. and Hwang, J.-N. (1993). "Robust Construction of Radial Basis Function Networks for
Classification," in Proceedings of the IEEE International Conference on Neural Networks (San Francisco
1993), vol. III, 1859-1864. IEEE, New York.

Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1989).
"Backpropagation Applied to Handwritten Zip Code Recognition," Neural Computation, 1(4), 541-551.

Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and Jackel, L. D. (1990).
"Handwritten Digit Recognition with a Backpropagation Network," in Advances in Neural Information
Processing Systems 2 (Denver 1989), D. S. Touretzky, Editor, 396-404. Morgan Kaufmann, San Mateo.

Le Cun, Y., Kanter, I., and Solla, S. A. (1991a). "Second Order Properties of Error Surfaces: Learning Time
and Generalization," in Advances in Neural Information Processing Systems 3 (Denver 1990), R. P.
Lippmann, J. E. Moody, and D. S. Touretzky, Editors, 918-924. Morgan Kaufmann, San Mateo.

Le Cun, Y., Kanter, I., and Solla, S. A. (1991b). "Eigenvalues of Covariance Matrices: Application to
Neural Network Learning," Phys. Rev. Lett., 66, 2396-2399.

Le Cun, Y., Simard, P. Y., and Pearlmutter, B. (1993). "Automatic Learning Rate Maximization by On-
Line Estimation of the Hessian's Eigenvectors," in Advances in Neural Information Processing Systems 5
(Denver 1992), S. J. Hanson, J. D. Cowan, and C. L. Giles, Editors, 156-163. Morgan Kaufmann, San
Mateo.

Lee, B. W. and Sheu, B. J. (1991). "Hardware Annealing in Electronic Neural Networks," IEEE
Transactions on Circuits and Systems, 38, 134-137.

Lee, B. W. and Sheu, B. J. (1993). "Parallel Hardware Annealing for Optimal Solutions on Electronic
Neural Networks," IEEE Transactions on Neural Networks, 4(4), 588-599.

Lee, S. and Kil, R. (1988). "Multilayer Feedforward Potential Function Networks," in Proceedings of the
IEEE Second International Conference on Neural Networks (San Diego 1988), vol. I, 161-171. IEEE, New
York.

Lee, Y. (1991). "Handwritten Digit Recognition Using k-Nearest Neighbor, Radial-Basis Functions, and
Backpropagation Neural Networks," Neural Computation, 3(3), 440-449.

Lee, Y. and Lippmann, R. P. (1990). "Practical Characteristics of Neural Networks and Conventional
Pattern Classifiers on Artificial and Speech Problems," in Advances in Neural Information Processing
Systems 2 (Denver 1989), D. S. Touretzky, Editor, 168-177. Morgan Kaufmann, San Mateo.

Lee, Y., Oh, S.-H., and Kim, M. W. (1991). "The Effect of Initial Weights on Premature Saturation in
Back-Propagation Learning," in International Joint Conference on Neural Networks (Seattle 1991), vol. I,
765-770. IEEE, New York.

von Lehman, A., Paek, E. G., Liao, P. F., Marrakchi, A., and Patel, J. S. (1988). "Factors Influencing Learning
by Back-Propagation," in IEEE International Conference on Neural Networks (San Diego 1988), vol. I,
335-341. IEEE, New York.

Leshno, M., Lin, V. Y., Pinkus, A., and Schocken, S. (1993). "Multilayer Feedforward Networks with a
Nonpolynomial Activation Function Can Approximate Any Function," Neural Networks, 6(6), 861-867.

Leung, C. S. and Cheung, K. F. (1991). "Householder Encoding for Discrete Bidirectional Associative
Memory," in Proc. Int. Conference on Neural Networks (Singapore 1991), 237-241.

Levin, A. V. and Narendra, K. S. (1992). "Control of Nonlinear Dynamical Systems Using Neural
Networks, Part II: Observability and Identification," Technical Report 9116, Center for Systems Science,
Yale Univ., New Haven, CT.

Lewis II, P. M. and Coates, C. L. (1967). Threshold Logic. John Wiley, New York, NY.

Light, W. A. (1992a). "Ridge Functions, Sigmoidal Functions and Neural Networks," in Approximation
Theory VII, E. W. Cheney, C. K. Chui, and L. L. Schumaker, Editors, 163-206. Academic Press, Boston.

Light, W. A. (1992b). "Some Aspects of Radial Basis Function Approximation," in Approximation Theory,
Spline Functions, and Applications, S. P. Singh, Editor, NATO ASI Series, 256, 163-190. Kluwer
Academic Publishers, Boston, MA.

Ligthart, M. M., Aarts, E. H. L., and Beenker, F. P. M. (1986). "Design-for-Testability of PLA's Using
Statistical Cooling," in Proc. ACM/IEEE 23rd Design Automation Conference (Las Vegas 1986), 339-345.

Lin, J.-N. and Unbehauen, R. (1993). "On the Realization of a Kolmogorov Network," Neural
Computation, 5(1), 21-31.

Linde, Y., Buzo, A., and Gray, R. M. (1980). "An Algorithm for Vector Quantizer Design," IEEE Trans. on
Communications, COM-28, 84-95.

Linsker, R. (1986). "From Basic Network Principles to Neural Architecture," Proceedings of the National
Academy of Sciences, USA, 83, 7508-7512, 8390-8394, 8779-8783.

Linsker, R. (1988). "Self-Organization in a Perceptual Network," Computer, March 1988, 105-117.

Lippmann, R. P. (1987). "An Introduction to Computing with Neural Nets," IEEE Magazine on Acoustics,
Signal, and Speech Processing (April), 4, 4-22.

Lippmann, R. P. (1989). "Review of Neural Networks for Speech Recognition," Neural Computation, 1(1),
1-38.

Little, W. A. (1974). "The Existence of Persistent States in the Brain," Math Biosci., 19, 101-120.

Ljung, L. (1977). "Analysis of Recursive Stochastic Algorithms," IEEE Trans. on Automatic Control, AC-
22(4), 551-575.

Ljung, L. (1978). "Strong Convergence of Stochastic Approximation Algorithm," Annals of Statistics, 6(3),
680-696.

Lo, Z.-P., Yu, Y., and Bavarian, B. (1993). "Analysis of the Convergence Properties of Topology
Preserving Neural Networks," IEEE Transactions on Neural Networks, 4(2), 207-220.

Loève, M. (1963). Probability Theory, 3rd edition, Van Nostrand, New York.

Logar, A. M., Corwin, E. M., and Oldham, W. J. B. (1993). "A Comparison of Recurrent Neural Network
Learning Algorithms," in Proceedings of the IEEE International Conference on Neural Networks (San
Francisco 1993), vol. II, 1129-1134. IEEE, New York.

Luenberger, D. G. (1969). Optimization by Vector Space Methods. John Wiley, New York, NY.

Macchi, O. and Eweda, E. (1983). "Second-Order Convergence Analysis of Stochastic Adaptive Linear
Filtering," IEEE Trans. Automatic Control, AC-28(1), 76-85.

Mackey, M. C. and Glass, L. (1977). "Oscillation and Chaos in Physiological Control Systems," Science,
197, 287-289.

MacQueen, J. (1967). "Some Methods for Classification and Analysis of Multivariate Observations," in
Proceedings of the Fifth Berkeley Symposium on Mathematics, Statistics, and Probability, L. M. LeCam
and J. Neyman, Editors, 281-297. University of California Press, Berkeley.

Magnus, J. R. and Neudecker, H. (1988). Matrix Differential Calculus with Applications in Statistics and
Econometrics. Wiley, Chichester.

Makram-Ebeid, S., Sirat, J.-A., and Viala, J.-R. (1989). "A Rationalized Back-Propagation Learning
Algorithm," in International Joint Conference on Neural Networks (Washington 1989), vol. II, 373-380.
IEEE, New York.

von der Malsburg, C. (1973). "Self-Organization of Orientation Sensitive Cells in the Striate Cortex,"
Kybernetik, 14, 85-100.

Mano, M. M. (1979). Digital Logic and Computer Design, Prentice-Hall, Englewood Cliffs, NJ.

Mao, J. and Jain, A. K. (1993). "Regularization Techniques in Artificial Neural Networks," in Proc. World
Congress on Neural Networks (Portland 1993), vol. IV, 75-79. LEA, Hillsdale.

Marchand, M., Golea, M., and Rujan, P. (1990). "A Convergence Theorem for Sequential Learning in Two-
Layer Perceptrons," Europhysics Letters, 11, 487-492.

Marcus, C. M. and Westervelt, R. M. (1989). "Dynamics of Iterated-Map Neural Networks," Physical
Review A, 40(1), 501-504.

Marcus, C. M., Waugh, F. R., and Westervelt, R. M. (1990). "Associative Memory in an Analog Iterated-
Map Neural Network," Physical Review A, 41(6), 3355-3364.

Marr, D. (1969). "A Theory of Cerebellar Cortex," J. Physiol. (London), 202, 437-470.

Martin, G. L. (1990). "Integrating Segmentation and Recognition Stages for Overlapping Handprinted
Characters," MCC Tech. Rep. ACT-NN-320-90, Austin, Texas.

Martin, G. L. (1993). "Centered-Object Integrated Segmentation and Recognition of Overlapping
Handprinted Characters," Neural Networks, 5(3), 419-429.

Martin, G. L., and Pittman, J. A. (1991). "Recognizing Hand-Printed Letters and Digits Using
Backpropagation Learning," Neural Computation, 3(2), 258-267.

Mays, C. H. (1964). "Effects of Adaptation Parameters on Convergence Time and Tolerance for Adaptive
Threshold Elements," IEEE Trans. Elec. Comp., EC-13, 465-468.

McCulloch, W. S. and Pitts, W. (1943). "A Logical Calculus of Ideas Immanent in Nervous Activity,"
Bulletin of Mathematical Biophysics, 5, 115-133.

McEliece, R. J., Posner, E. C., Rodemich, E. R., and Venkatesh, S. S. (1987). "The Capacity of the
Hopfield Associative Memory," IEEE Trans. Info. Theory, IT-33, 461-482.

McInerny, J. M., Haines, K. G., Biafore, S., and Hecht-Nielsen, R. (1989). "Backpropagation Error
Surfaces Can Have Local Minima," in International Joint Conference on Neural Networks (Washington
1989), vol. II, 627. IEEE, New York.

Mead, C. (1991). "Neuromorphic Electronic Systems," Aerospace and Defense Science, 10(2), 20-28.

Medgassy, P. (1961). Decomposition of Superposition of Distributed Functions. Hungarian Academy of
Sciences, Budapest.

Megiddo, N. (1986). "On the Complexity of Polyhedral Separability," Tech. Rep. RJ 5252, IBM Almaden
Research Center, San Jose, CA.

Mel, B. W. and Omohundro, S. M. (1991). "How Receptive Field Parameters Affect Neural Learning," in
Advances in Neural Information Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E. Moody, and
D. S. Touretzky, Editors, 757-763. Morgan Kaufmann, San Mateo.

Metropolis, N., Rosenbluth, A., Teller, A., and Teller, E. (1953). "Equation of State Calculations by Fast
Computing Machines," J. Chemical Physics, 21(6), 1087-1092.

Mézard, M. and Nadal, J.-P. (1989). "Learning in Feedforward Layered Networks: The Tiling Algorithm,"
Journal of Physics A, 22, 2191-2204.

Micchelli, C. A. (1986). "Interpolation of Scattered Data: Distance Matrices and Conditionally Positive Definite
Functions," Constructive Approximation, 2, 11-22.

Miller, G. F., Todd, P. M., and Hegde, S. U. (1989). "Designing Neural Networks Using Genetic
Algorithms," in Proceedings of the Third International Conference on Genetic Algorithms (Arlington
1989), J. D. Schaffer, Editor, 379-384. Morgan Kaufmann, San Mateo.

Miller, W. T., Sutton, R. S., and Werbos, P. J., Editors (1990a). Neural Networks for Control. MIT Press,
Cambridge.

Miller, W. T., Box, B. A., and Whitney, E. C. (1990b). "Design and Implementation of a High Speed
CMAC Neural Network Using Programmable CMOS Logic Cell Arrays," Report No. ECE.IS.90.01,
University of New Hampshire.

Miller, W. T., Glanz, F. H., and Kraft, L. G. (1990c). "CMAC: An Associative Neural Network Alternative
to Backpropagation," Proc. IEEE, 78(10), 1561-1567.

Miller, W. T., Hewes, R. P., Glanz, F. H., and Kraft, L. G. (1990d). "Real-Time Dynamic Control of an
Industrial Manipulator Using a Neural-Network-Based Learning Controller," IEEE Trans. Robotics
Automation, 6, 1-9.

Minsky, M. and Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press,
Cambridge, MA.

Møller, M. F. (1990). "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Technical
Report PB-339, Computer Science Department, University of Aarhus, Aarhus, Denmark.

Montana, D. J. and Davis, L. (1989). "Training Feedforward Networks Using Genetic Algorithms," in
Eleventh International Joint Conference on Artificial Intelligence (Detroit 1989), N. S. Sridharan, Editor,
762-767. Morgan Kaufmann, San Mateo.

Moody, J. (1989). "Fast Learning in Multi-Resolution Hierarchies," in Advances in Neural Information
Processing Systems 1 (Denver 1988), D. S. Touretzky, Editor, 29-39. Morgan Kaufmann, San Mateo.

Moody, J. and Darken, C. (1989a). "Learning with Localized Receptive Fields," in Proceedings of the 1988
Connectionist Models Summer School (Pittsburgh 1988), D. Touretzky, G. Hinton, and T. Sejnowski,
Editors, 133-143. Morgan Kaufmann, San Mateo.

Moody, J. and Darken, C. (1989b). "Fast Learning in Networks of Locally-Tuned Processing Units,"
Neural Computation, 1(2), 281-294.

Moody, J. and Yarvin, N. (1992). "Networks with Learned Unit Response Functions," in Advances in
Neural Information Processing Systems 4 (Denver 1991), J. E. Moody, S. J. Hanson, and R. P. Lippmann,
Editors, 1048-1055. Morgan Kaufmann, San Mateo.

Moore, B. (1989). "ART1 and Pattern Clustering," in Proceedings of the 1988 Connectionist Models
Summer School (Pittsburgh 1988), D. Touretzky, G. Hinton, and T. Sejnowski, Editors, 174-185. Morgan
Kaufmann, San Mateo.

Morgan, N. and Bourlard, H. (1990). "Generalization and Parameter Estimation in Feedforward Nets: Some
Experiments," in Advances in Neural Information Processing Systems 2 (Denver 1989), D. S. Touretzky,
Editor, 630-637. Morgan Kaufmann, San Mateo.

Morita, M. (1993). "Associative Memory with Nonmonotone Dynamics," Neural Networks, 6(1), 115-126.

Morita, M., Yoshizawa, S., and Nakano, K. (1990a). "Analysis and Improvement of the Dynamics of
Autocorrelation Associative Memory," Trans. Institute Electronics, Information and Communication
Engrs., J73-D-III(2), 232-242.

Morita, M., Yoshizawa, S., and Nakano, K. (1990b). "Memory of Correlated Patterns by Associative Neural
Networks with Improved Dynamics," in Proc. INNC '90, Paris, vol. 2, 868-871.

Mosteller, F. and Tukey, J. (1980). Robust Estimation Procedures. Addison-Wesley, New York.

Mosteller, F., Rourke, R. E., and Thomas Jr., G. B. (1970). Probability with Statistical Applications, 2nd
edition. Addison-Wesley, Reading, MA.

Mukhopadhyay, S., Roy, A., Kim, L. S., and Govil, S. (1993). "A Polynomial Time Algorithm for
Generating Neural Networks for Pattern Classification: Its Stability Properties and Some Test Results,"
Neural Computation, 5(2), 317-330.

Muroga, S. (1959). "The Principle of Majority Decision Logical Elements and the Complexity of their
Circuits," Proc. Int. Conf. on Information Processing, Paris, 400-407.

Muroga, S. (1965). "Lower Bounds of the Number of Threshold Functions and a Maximum Weight," IEEE
Trans. Elect. Comp., EC-14(2), 136-148.

Muroga, S. (1971). Threshold Logic and its Applications. John Wiley Interscience, New York, NY.
Musavi, M. T., Ahmed, W., Chan, K. H., Faris, K. B., and Hummels, D. M. (1992). "On the Training of
Radial Basis Function Classifiers," Neural Networks, 5(4), 595-603.

Nakano, K. (1972). "Associatron: A Model of Associative Memory," IEEE Trans. Sys. Man Cybern., SMC-
2, 380-388.

Narayan, S. (1993). "ExpoNet: A Generalization of the Multi-Layer Perceptron Model," in Proc. World
Congress on Neural Networks (Portland 1993), vol. III, 494-497. LEA, Hillsdale.

Narendra, K. S. and Parthasarathy, K. (1990). "Identification and Control of Dynamical Systems Using
Neural Networks," IEEE Trans. Neural Networks, 1(1), 4-27.

Narendra, K. S. and Wakatsuki, K. (1991). "A Comparative Study of Two Neural Network Architectures
for the Identification and Control of Nonlinear Dynamical Systems," Technical Report, Center for Systems
Science, Yale University, New Haven, CT.

Nerrand, O., Roussel-Ragot, P., Personnaz, L., Dreyfus, G., and Marcos, S. (1993). "Neural Networks and
Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms," Neural Computation, 5(2), 165-
199.

Newman, C. (1988). "Memory Capacity in Neural Network Models: Rigorous Lower Bounds," Neural
Networks, 3(2), 223-239.

Nguyen, D. and Widrow, B. (1989). "The Truck Backer-Upper: An Example of Self-Learning in Neural
Networks," in Proceedings of the International Joint Conference on Neural Networks (Washington, DC
1989), vol. II, 357-362.

Nilsson, N. J. (1965). Learning Machines. McGraw-Hill, New York. Reissued as The Mathematical
Foundations of Learning Machines. Morgan Kaufmann, San Mateo, CA, 1990.

Niranjan, M. and Fallside, F. (1988). "Neural Networks and Radial Basis Functions in Classifying Static
Speech Patterns," Technical Report CUED/F-INFENG/TR.22, Engineering Department, Cambridge
University.

Nishimori, H. and Opris, I. (1993). "Retrieval Process of an Associative Memory with a General Input-
Output Function," Neural Networks, 6(8), 1061-1067.

Nolfi, S., Elman, J. L., and Parisi, D. (1990). "Learning and Evolution in Neural Networks," CRL Technical
Report 9019, University of California, San Diego.

Novikoff, A. B. J. (1962). "On Convergence Proofs of Perceptrons," Proc. Symp. on Math. Theory of
Automata (Polytechnic Institute of Brooklyn, Brooklyn, NY), 615-622.

Nowlan, S. J. (1988). "Gain Variation in Recurrent Error Propagation Networks," Complex Systems, 2, 305-
320.

Nowlan, S. J. (1990). "Maximum Likelihood Competitive Learning," in Advances in Neural Information
Processing Systems 2 (Denver 1989), D. Touretzky, Editor, 574-582. Morgan Kaufmann, San Mateo.

Nowlan, S. J. and Hinton, G. E. (1992a). "Adaptive Soft Weight Tying using Gaussian Mixtures," in
Advances in Neural Information Processing Systems 4 (Denver, 1991), J. E. Moody, S. J. Hanson, and R. P.
Lippmann, Editors, 993-1000. Morgan Kaufmann, San Mateo.
Nowlan, S. J., and Hinton, G. E. (1992b). "Simplifying Neural Networks by Soft Weight-Sharing," Neural
Computation, 4(4), 473-493.

Oja, E. (1982). "A Simplified Neuron Model As a Principal Component Analyzer," Journal of
Mathematical Biology, 15, 267-273.

Oja, E. (1983). Subspace Methods of Pattern Recognition. Research Studies Press and John Wiley.
Letchworth, England.

Oja, E. (1989). "Neural Networks, Principal Components, and Subspaces," International Journal of Neural
Systems, 1(1), 61-68.

Oja, E. (1991). "Data Compression, Feature Extraction, and Autoassociation in Feedforward Neural
Networks," Artificial Neural Networks, Proceedings of the 1991 International Conference on Artificial
Neural Networks (Espoo 1991), T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Editors, vol. I, 737-
745. Elsevier Science Publishers B. V., Amsterdam.

Oja, E. and Karhunen, J. (1985). "On Stochastic Approximation of the Eigenvectors of the Expectation of a
Random Matrix," Journal of Mathematical Analysis and Applications, 106, 69-84.

Okajima, K., Tanaka, S., and Fujiwara, S. (1987). "A Heteroassociative Memory Network with Feedback
Connection," in Proc. IEEE First International Conference on Neural Networks (San Diego 1987), M.
Caudill & C. Butler, Editors, vol. II, 711-718.

Paek, E. G. and Psaltis, D. (1987). "Optical Associative Memory Using Fourier Transform Holograms,"
Optical Engineering, 26, 428-433.

Pao, Y. H. (1989). Adaptive Pattern Recognition and Neural Networks. Addison-Wesley, Reading, MA.

Papadimitriou, C. H. and Steiglitz, K. (1982). Combinatorial Optimization: Algorithms and Complexity.
Prentice-Hall, Englewood Cliffs.

Park, J. and Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks,"
Neural Computation, 3(2), 246-257.

Park, J. and Sandberg, I. W. (1993). "Approximation and Radial-Basis-Function Networks," Neural
Computation, 5(2), 305-316.

Parker, D. B. (1985). "Learning Logic," Technical Report TR-47, Center for Computational Research in
Economics and Management Science, Massachusetts Institute of Technology, Cambridge, MA.

Parker, D. B. (1987). "Optimal Algorithms for Adaptive Networks: Second Order Backprop, Second Order
Direct Propagation, and Second Order Hebbian Learning," in IEEE First International Conference on
Neural Networks (San Diego 1987), M. Caudill and C. Butler, Editors, vol. II, 593-600. IEEE, New York.

Parks, M. (1987). "Characterization of the Boltzmann Machine Learning Rate," in IEEE First International
Conference on Neural Networks (San Diego 1987), M. Caudill and C. Butler, Editors, vol. III, 715-719.
IEEE, New York.

Parks, P. C. and Militzer, J. (1991). "Improved Allocation of Weights for Associative Memory Storage in
Learning Control Systems," Proceedings of the 1st IFAC Symposium on Design Methods of Control Systems,
vol. II, 777-782. Pergamon Press, Zurich.

Parzen, E. (1962). "On Estimation of a Probability Density Function and Mode," Ann. Math. Statist., 33,
1065-1076.
Pearlmutter, B. A. (1988). "Learning State Space Trajectories in Recurrent Neural Networks," Technical
Report CMU-CS-88-191, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Pearlmutter, B. A. (1989a). "Learning State Space Trajectories in Recurrent Neural Networks," in
International Joint Conference on Neural Networks (Washington 1989), vol. II, 365-372. IEEE, New York.

Pearlmutter, B. A. (1989b). "Learning State Space Trajectories in Recurrent Neural Networks," Neural
Computation, 1(2), 263-269.

Penrose, R. (1955). "A Generalized Inverse for Matrices," Proc. Cambridge Philosophical Society, 51, 406-
413.

Peretto, P. (1984). "Collective Properties of Neural Networks: A Statistical Physics Approach," Biological
Cybernetics, 50, 51-62.

Personnaz, L., Guyon, I., and Dreyfus, G. (1986). "Collective Computational Properties of Neural
Networks: New Learning Mechanisms," Physical Review A, 34(5), 4217-4227.

Peterson, C. and Anderson, J. R. (1987). "A Mean Field Theory Learning Algorithm for Neural Networks,"
Complex Systems, 1, 995-1019.

Peterson, G. E. and Barney, H. L. (1952). "Control Methods used in a Study of the Vowels," Journal of the
Acoustical Society of America, 24(2), 175-184.

Pflug, G. Ch. (1990). "Non-Asymptotic Confidence Bounds for Stochastic Approximation Algorithms with
Constant Step Size," Mathematik, 110, 297-314.

Pineda, F. J. (1988). "Dynamics and Architectures for Neural Computation," Journal of Complexity, 4,
216-245.

Pineda, F. J. (1987). "Generalization of Back-Propagation to Recurrent Neural Networks," Physical Review
Letters, 59, 2229-2232.

Platt, J. (1991). "A Resource-Allocating Network for Function Interpolation," Neural Computation, 3(2),
213-225.

Plaut, D. S., Nowlan, S., and Hinton, G. (1986). "Experiments on Learning by Back Propagation,"
Technical Report CMU-CS-86-126, Department of Computer Science, Carnegie Mellon University,
Pittsburgh, PA.

Poggio, T. and Girosi, F. (1989). "A Theory of Networks for Approximation and Learning," A. I. Memo
1140, M.I.T., Cambridge, MA.

Poggio, T. and Girosi, F. (1990a). "Networks for Approximation and Learning," Proceedings of the IEEE,
78(9), 1481-1497.

Poggio, T. and Girosi, F. (1990b). "Regularization Algorithms for Learning that are Equivalent to
Multilayer Networks," Science, 247, 978-982.

Polak, E. and Ribière, G. (1969). "Note sur la Convergence de Méthodes de Directions Conjuguées," Revue
Française d'Informatique et de Recherche Opérationnelle, 3, 35-43.

Polyak, B. T. (1987). Introduction to Optimization. Optimization Software, Inc., New York.

Polyak, B. T. (1990). "New Method of Stochastic Approximation Type," Automat. Remote Control, 51,
937-946.
Pomerleau, D. A. (1991). "Efficient Training of Artificial Neural Networks for Autonomous Navigation,"
Neural Computation, 3(1), 88-97.

Pomerleau, D. A. (1993). Neural Network Perception for Mobile Robot Guidance. Kluwer, Boston.

Powell, M. J. D. (1987). "Radial Basis Functions for Multivariate Interpolation: A Review," in Algorithms
for the Approximation of Functions and Data, J. C. Mason and M. G. Cox, Editors, Clarendon Press,
Oxford.

Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T. (1986). Numerical Recipes: The Art
of Scientific Computing. Cambridge University Press, Cambridge.

Psaltis, D. and Park, C. H. (1986). "Nonlinear Discriminant Functions and Associative Memories," in
Neural Networks for Computing, J. S. Denker, Editor, Proc. American Inst. Physics, vol. 151, 370-375.

Qi, X. and Palmieri, F. (1993). "The Diversification Role of Crossover in the Genetic Algorithms,"
Proceedings of the Fifth International Conference on Genetic Algorithms (Urbana-Champaign 1993), S.
Forrest, Editor, 132-137. Morgan Kaufmann, San Mateo.

Qian, N. and Sejnowski, T. (1989). "Learning to Solve Random-Dot Stereograms of Dense Transparent
Surfaces with Recurrent Back-Propagation," in Proceedings of the 1988 Connectionist Models Summer
School (Pittsburgh 1988), D. Touretzky, G. Hinton, and T. Sejnowski, Editors, 435-443. Morgan
Kaufmann, San Mateo.

Rao, C. R. and Mitra, S. K. (1971). Generalized Inverse of Matrices and its Applications. John Wiley, New
York.

Reed, R. (1993). "Pruning Algorithms - A Survey," IEEE Trans. Neural Networks, 4(5), 740-747.

Reeves, C. R. (1993). "Using Genetic Algorithms with Small Populations," Proceedings of the Fifth
International Conference on Genetic Algorithms (Urbana-Champaign 1993), S. Forrest, Editor, 92-99.
Morgan Kaufmann, San Mateo.

Reilly, D. L. and Cooper, L. N. (1990). "An Overview of Neural Networks: Early Models to Real World
Systems," in An Introduction to Neural and Electronic Networks, S. F. Zornetzer, J. L. Davis, and C. Lau,
Editors. Academic Press, San Diego.

Reilly, D. L., Cooper, L. N., and Elbaum, C. (1982). "A Neural Model for Category Learning," Biological
Cybernetics, 45, 35-41.

Rezgui, A. and Tepedelenlioglu, N. (1990). "The Effect of the Slope of the Activation Function on the
Backpropagation Algorithm," in Proceedings of the International Joint Conference on Neural Networks
(Washington, DC 1990), M. Caudill, Editor, vol. I, 707-710. IEEE, New York.

Ricotti, L. P., Ragazzini, S., and Martinelli, G. (1988). "Learning of Word Stress in a Sub-Optimal Second
Order Back-Propagation Neural Network," in IEEE First International Conference on Neural Networks
(San Diego 1987), M. Caudill and C. Butler, Editors, vol. I, 355-361. IEEE, New York.

Ridgway III, W. C. (1962). "An Adaptive Logic System with Generalizing Properties," Technical Report
1556-1, Stanford Electronics Labs., Stanford University, Stanford, CA.

Riedel, H. and Schild, D. (1992). "The Dynamics of Hebbian Synapses can be Stabilized by a Nonlinear
Decay Term," Neural Networks, 5(3), 459-463.

Ritter, H. and Schulten, K. (1986). "On the Stationary State of Kohonen's Self-Organizing Sensory
Mapping," Biol. Cybernetics, 54, 99-106.
Ritter, H. and Schulten, K. (1988a). "Kohonen's Self-Organizing Maps: Exploring Their Computational
Capabilities," in IEEE International Conference on Neural Networks (San Diego 1988), vol. I, 109-116.
IEEE, New York.

Ritter, H. and Schulten, K. (1988b). "Convergence Properties of Kohonen's Topology Conserving Maps:
Fluctuations, Stability, and Dimension Selection," Biol. Cybernetics, 60, 59-71.

Robinson, A. J. and Fallside, F. (1988). "Static and Dynamic Error Propagation Networks with Application
to Speech Coding," in Neural Information Processing Systems (Denver 1987), D. Z. Anderson, Editor, 632-
641. American Institute of Physics, New York.

Robinson, A. J., Niranjan, M., and Fallside, F. (1989). "Generalizing the Nodes of the Error Propagation
Network," (abstract) in Proc. Int. Joint Conference on Neural Networks (Washington, D. C. 1989), vol. II,
582. IEEE, New York. Also, printed as Technical Report CUED/F-INFENG/TR.25, Cambridge University,
Engineering Department, Cambridge, England.

Rohwer, R. (1990). "The 'Moving Targets' Training Algorithm," in Advances in Information Processing
Systems 2 (Denver 1989), D. S. Touretzky, Editor, 558-565. Morgan Kaufmann, San Mateo.

Romeo, F. I. (1989). Simulated Annealing: Theory and Applications to Layout Problems. Ph.D. Thesis,
Memorandum UCB/ERL-M89/29, University of California at Berkeley. Berkeley, CA.

Rosenblatt, F. (1961). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.
Spartan Press, Washington, D. C.

Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.
Spartan Books, Washington, DC.

Roy, A. and Govil, S. (1993). "Generating Radial Basis Function Net in Polynomial Time for
Classification," in Proc. World Congress on Neural Networks (Portland 1993), vol. III, 536-539. LEA,
Hillsdale.

Roy, A. and Mukhopadhyay, S. (1991). "Pattern Classification Using Linear Programming," ORSA J.
Comput., 3(1), 66-80.

Roy, A., Kim, L. S., and Mukhopadhyay, S. (1993). "A Polynomial Time Algorithm for the Construction
and Training of a Class of Multilayer Perceptrons," Neural Networks, 6(4), 535-545.

Rozonoer, L. I. (1969). "Random Logic Nets, I," Automat. Telemekh., 5, 137-147.

Rubner, J. and Tavan, P. (1989). "A Self-Organizing Network for Principal-Component Analysis,"
Europhysics Letters, 10, 693-698.

Rudin, W. (1964). Principles of Mathematical Analysis. McGraw-Hill, New York, NY.

Rumelhart, D. E. (1989). "Learning and Generalization in Multilayer Networks," presentation given at the
NATO Advanced Research Workshop on Neuro Computing, Architecture, and Applications (Les Arcs,
France 1989).

Rumelhart, D. E. and Zipser, D. (1985). "Feature Discovery By Competitive Learning," Cognitive Science,
9, 75-112.

Rumelhart, D. E., McClelland, J. L. and the PDP Research Group. (1986a). Parallel Distributed
Processing: Exploration in the Microstructure of Cognition, volume 1, MIT Press, Cambridge.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986b). "Learning Internal Representations by Error
Propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. I,
D. E. Rumelhart, J. L. McClelland, and the PDP Research Group. MIT Press, Cambridge (1986).

Rutenbar, R. A. (1989). "Simulated Annealing Algorithms: An Overview," IEEE Circuits and Devices
Magazine, 5(1), 19-26.

Saha, A. and Keeler, J. D. (1990). "Algorithms for Better Representation and Faster Learning in Radial
Basis Function Networks," in Advances in Neural Information Processing Systems 2 (Denver 1989), D.
Touretzky, Editor, 482-489. Morgan Kaufmann, San Mateo.

Salamon, P., Nulton, J. D., Robinson, J., Petersen, J., Ruppeiner, G., and Liao, L. (1988). "Simulated
Annealing with Constant Thermodynamic Speed," Computer Physics Communications, 49, 423-428.

Sanger, T. D. (1989). "Optimal Unsupervised Learning in a Single Layer Linear Feedforward Neural
Network," Neural Networks, 2(6), 459-473.

Sato, M. (1990). "A Real Time Learning Algorithm for Recurrent Analog Neural Networks," Biological
Cybernetics, 62, 237-241.

Sayeh, M. R. and Han, J. Y. (1987). "Pattern Recognition Using a Neural Network," Proc. SPIE, Intelligent
Robots and Computer Vision, 848, 281-285.

Schaffer, J. D., Caruana, R. A., Eshelman, L. J., and Das, R. (1989). "A Study of Control Parameters
Affecting Online Performance of Genetic Algorithms for Function Optimization," in Proceedings of the
Third International Conference on Genetic Algorithms and their Applications (Arlington 1989), J. D.
Schaffer, Editor, 51-60. Morgan Kaufmann, San Mateo.

Schoen, F. (1991). "Stochastic Techniques for Global Optimization: A Survey of Recent Advances,"
Journal of Global Optimization, 1, 207-228.

Schultz, D. G. and Gibson, J. E. (1962). "The Variable Gradient Method for Generating Liapunov
Functions," Trans. IEE, 81(II), 203-210.

Schumaker, L. L. (1981). Spline Functions: Basic Theory. Wiley, New York.

Schwartz, D. B., Samalam, V. K., Solla, S. A., and Denker, J. S. (1990). "Exhaustive Learning," Neural
Computation, 2(3), 374-385.

Scofield, C. L., Reilly, D. L., Elbaum, C., and Cooper, L. N. (1988). "Pattern Class Degeneracy in an
Unrestricted Storage Density Memory," in Neural Information Processing Systems (Denver 1987), D. Z.
Anderson, Editor, 674-682. American Institute of Physics, New York.

Sejnowski, T. J. and Rosenberg, C. R. (1987). "Parallel Networks that Learn to Pronounce English Text,"
Complex Systems, 1, 145-168.

Sejnowski, T. J., Kienker, P. K., and Hinton, G. (1986). "Learning Symmetry Groups with Hidden Units:
Beyond the Perceptron," Physica, 22D, 260-275.

Shannon, C. E. (1938). "A Symbolic Analysis of Relay and Switching Circuits," Trans. of the AIEE, 57,
713-723.

Shaw, G. and Vasudevan, R. (1974). "Persistent States of Neural Networks and the Nature of Synaptic
Transmissions," Math. Biosci., 21, 207-218.

Sheng, C. L. (1969). Threshold Logic. Academic Press, New York, NY.


Shiino, M. and Fukai, T. (1990). "Replica-Symmetric Theory of the Nonlinear Analogue Neural Networks,"
J. Phs. A, 23, L1009-L1017.

Schrödinger, E. (1946). Statistical Thermodynamics. Cambridge University Press, London.

Sietsma, J. and Dow, R. J. F. (1988). "Neural Net Pruning - Why and How," in IEEE International
Conference on Neural Networks (San Diego 1988), vol. I, 325-333. IEEE, New York.

Silva, F. M. and Almeida, L. B. (1990). "Acceleration Techniques for the Backpropagation Algorithm,"
in Neural Networks (EURASIP Workshop 1990), Lecture Notes in Computer Science, L. B. Almeida and C. J. Wellekens, Editors, 110-
119. Springer-Verlag, Berlin.

Simard, P. Y., Ottaway, M. B., and Ballard, D. H. (1988). "Analysis of Recurrent Backpropagation,"
Technical Report 253, Department of Computer Science, University of Rochester.

Simard, P. Y., Ottaway, M. B., and Ballard, D. H. (1989). "Analysis of Recurrent Backpropagation," in
Proceedings of the 1988 Connectionist Models Summer School (Pittsburgh 1988), D. Touretzky, G. Hinton,
and T. Sejnowski, Editors, 103-112. Morgan Kaufmann, San Mateo.

Simeone, B., Editor (1989). Combinatorial Optimization. Springer-Verlag, New York.

Simpson, P. K. (1990). "Higher-Ordered and Intraconnected Bidirectional Associative Memory," IEEE
Trans. System, Man, and Cybernetics, 20(3), 637-653.

Sklansky, J. and Wassel, G. N. (1981). Pattern Classifiers and Trainable Machines. Springer-Verlag,
New York.

van der Smagt, P. P. (1994). "Minimisation Methods for Training Feedforward Neural Networks," Neural
Networks, 7(1), 1-11.

Smith, J. M. (1987). "When Learning Guides Evolution," Nature, 329, 761-762.

Smolensky, P. (1986). "Information Processing in Dynamical Systems: Foundations of Harmony Theory,"
in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. I, D. E.
Rumelhart, J. L. McClelland, and the PDP Research Group. MIT Press, Cambridge.

Snapp, R. R., Psaltis, D. and Venkatesh, S. S. (1991). "Asymptotic Slowing Down of the Nearest-Neighbor
Classifier," in Advances in Neural Information Processing Systems 3 (Denver 1990), R. P. Lippmann, J. E.
Moody, and D. S. Touretzky, Editors, 932-938. Morgan Kaufmann, San Mateo.

Solla, S. A., Levin, E., and Fleisher, M. (1988). "Accelerated Learning in Layered Neural Networks,"
Complex Systems, 2, 625-639.

Song, J. (1992). "Hybrid Genetic/Gradient Learning in Multi-Layer Artificial Neural Networks," Ph.D.
Dissertation, Department of Electrical and Computer Engineering, Wayne State University, Detroit,
Michigan.

Sontag, E. D. and Sussmann, H. J. (1985). "Image Restoration and Segmentation Using Annealing
Algorithm," in Proc. 24th Conference on Decision and Control (Ft. Lauderdale 1985), 768-773.

Soukoulis, C. M., Levin, K., and Grest, G. S. (1983). "Irreversibility and Metastability in Spin-Glasses. I.
Ising Model," Physical Review, B28, 1495-1509.

Specht, D. F. (1990). "Probabilistic Neural Networks," Neural Networks, 3(1), 109-118.


Sperduti, A. and Starita, A. (1991). "Extensions of Generalized Delta Rule to Adapt Sigmoid Functions,"
Proceedings of the 13th Annual International Conference IEEE/EMBS, 1393-1394. IEEE, New York.

Sperduti, A. and Starita, A. (1993). "Speed Up Learning and Networks Optimization with Extended Back
Propagation," Neural Networks, 6(3), 365-383.

Spitzer, A. R., Hassoun, M. H., Wang, C., and Bearden, F. (1990). "Signal Decomposition and Diagnostic
Classification of the Electromyogram Using a Novel Neural Network Technique," in Proc. XIVth Ann.
Symposium on Computer Applications in Medical Care (Washington D. C., 1990), R. A. Miller, Editor,
552-556. IEEE Computer Society Press, Los Alamitos.

Sprecher, D. A. (1993). "A Universal Mapping for Kolmogorov's Superposition Theorem," Neural
Networks, 6(8), 1089-1094.

Stent, G. S. (1973). "A Physiological Mechanism for Hebb's Postulate of Learning," Proceedings of the
National Academy of Sciences (USA), 70, 997-1001.

Stiles, G. S. and Denq, D-L. (1987). "A Quantitative Comparison of Three Discrete Distributed Associative
Memory Models," IEEE Trans. Computers, C-36, 257-263.

Stinchcombe, M. and White, H. (1989). "Universal Approximations Using Feedforward Networks with
Non-Sigmoid Hidden Layer Activation Functions," Proc. Int. Joint Conf. Neural Networks (Washington, D.
C. 1989), vol. I, 613-617. SOS Printing, San Diego.

Stone, M. (1978). "Cross-Validation: A Review," Math. Operationsforsch Statistik, 9, 127-140.

Sudjianto, A. and Hassoun, M. (1994). "Nonlinear Hebbian Rule: A Statistical Interpretation," IEEE
International Conference on Neural Networks, (Orlando 1994), vol. XXX, XXXpage numbersXXX, IEEE
Press.

Sun, G.-Z., Chen, H.-H., and Lee, Y.-C. (1992). "Green's Function Method for Fast On-Line Learning
Algorithm of Recurrent Neural Networks," in Advances in Neural Information Processing 4 (Denver 1991),
J. E. Moody, S. J. Hanson, and R. P. Lippmann, Editors, 317-324. Morgan Kaufmann, San Mateo.

Sun, X. and Cheney, E. W. (1992). "The Fundamentals of Sets of Ridge Functions," Aequationes Math., 44,
226-235.

Suter, B. and Kabrisky, M. (1992). "On a Magnitude Preserving Iterative MAXnet Algorithm," Neural
Computation, 4(2), 224-233.

Sutton, R. (1986). "Two Problems with Backpropagation and Other Steepest-Descent Learning Procedures
for Networks," Proceedings of the 8th Annual Conference on the Cognitive Science Society (Amherst
1986), 823-831. Lawrence Erlbaum, Hillsdale.

Sutton, R. S., Editor. (1992). Special Issue on Reinforcement Learning, Machine Learning, 8, 1-395.

Sutton, R. S., Barto, A. G., and Williams, R. J. (1991). "Reinforcement Learning is Direct Adaptive
Optimal Control," in Proc. of the American Control Conference (Boston 1991), 2143-2146.

Szu, H. (1986). "Fast Simulated Annealing," in Neural Networks for Computing (Snowbird 1986), J. S.
Denker, Editor, 420-425. American Institute of Physics, New York.

Takefuji, Y. and Lee, K. C. (1991). "Artificial Neural Network for Four-Coloring Map Problems and K-
Colorability Problem," IEEE Transactions on Circuits and Systems, 38, 326-333.
Takens, F. (1981). "Detecting Strange Attractors in Turbulence," in Dynamical Systems and Turbulence.
Lecture Notes in Mathematics, vol. 898 (Warwick 1980), D. A. Rand and L.-S. Young, Editors, 366-381.
Springer-Verlag, Berlin.

Takeuchi, A. and Amari, S.-I. (1979). "Formation of Topographic Maps and Columnar Microstructures,"
Biol. Cybernetics, 35, 63-72.

Tank, D. W. and Hopfield, J. J. (1986). "Simple 'Neural' Optimization Networks: An A/D Converter,
Signal Decision Circuit, and a Linear Programming Circuit," IEEE Transactions on Circuits and Systems,
33, 533-541.

Tank, D. W. and Hopfield, J. J. (1987). "Concentrating Information in Time: Analog Neural Networks with
Applications to Speech Recognition Problems," in IEEE First International Conference on Neural
Networks (San Diego 1987), M. Caudill and C. Butler, Editors, vol. IV, 455-468. IEEE, New York.

Tattersal, G. D., Linford, P. W., and Linggard, R. (1990). "Neural Arrays for Speech Recognition," in
Speech and Language Processing, C. Wheddon and R. Linggard, Editors, 245-290. Chapman and Hall,
London.

Tawel, R. (1989). "Does the Neuron 'Learn' Like the Synapse?" in Advances in Neural Information
Processing Systems 1 (Denver 1988), D. S. Touretzky, Editor, 169-176. Morgan Kaufmann, San Mateo.

Taylor, J. G. and Coombes, S. (1993). "Learning Higher Order Correlations," Neural Networks, 6(3), 423-
427.

Tesauro, G. and Janssens, B. (1988). "Scaling Relationships in Back-Propagation Learning," Complex
Systems, 2, 39-44.

Thierens, D. and Goldberg, D. (1993). "Mixing in Genetic Algorithms," Proceedings of the Fifth
International Conference on Genetic Algorithms (Urbana-Champaign 1993), S. Forrest, Editor, 38-45.
Morgan Kaufmann, San Mateo.

Thorndike, E. L. (1911). Animal Intelligence. Hafner, Darien, CT.

Ticknor, A. J. and Barrett, H. (1987). "Optical Implementations of Boltzmann Machines," Optical
Engineering, 26, 16-21.

Tishby, N., Levin, E., and Solla, S. A. (1989). "Consistent Inference of Probabilities in Layered Networks:
Predictions and Generalization," in International Joint Conference on Neural Networks (Washington 1989),
vol. II, 403-410. IEEE, New York.

Tolat, V. (1990). "An Analysis of Kohonen's Self-Organizing Maps Using a System of Energy Functions,"
Biological Cybernetics, 64, 155-164.

Tollenaere, T. (1990). "SuperSAB: Fast Adaptive Back Propagation with Good Scaling Properties," Neural
Networks, 3(5), 561-573.

Tompkins, C. B. (1956). "Methods of Steepest Descent," in Modern Mathematics for the Engineer, E. B.
Beckenbach, Editor, McGraw-Hill, New York.

Törn, A. A. and Žilinskas, A. (1989). Global Optimization. Springer-Verlag, Berlin.

Tsypkin, Ya. Z. (1971). Adaptation and Learning in Automatic Systems. Translated by Z. J. Nikolic.
Academic Press, New York. (First published in Russian language under the title Adaptatsia i obuchenie v
avtomaticheskikh sistemakh. Nauka, Moscow 1968)
Turing, A. M. (1952). "The Chemical Basis of Morphogenesis," Philosophical Transactions of the Royal
Society, Series B, 237, 5-72.

Uesaka, G. and Ozeki, K. (1972). "Some Properties of Associative Type Memories," Journal of the
Institute of Electrical and Communication Engineers of Japan, 55-D, 323-330.

Usui, S., Nakauchi, S., and Nakano, M. (1991). "Internal Color Representation Acquired by a Five-Layer
Network," Artificial Neural Networks, Proceedings of the 1991 International Conference on Artificial
Neural Networks (Espoo 1991), T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, Editors, vol. I, 867-
872. Elsevier Science Publishers B. V., Amsterdam.

Vapnik, V. N. and Chervonenkis, A. Y. (1971). "On the Uniform Convergence of Relative Frequencies of
Events to Their Probabilities," Theory of Probability and its Applications, 16(2), 264-280.

Veitch, E. W. (1952). "A Chart Method for Simplifying Truth Functions," Proc. of the ACM, 127-133.

Villiers, J. and Barnard, E. (1993). "Backpropagation Neural Nets with One and Two Hidden Layers,"
IEEE Transactions on Neural Networks, 4(1), 136-141.

Vogl, T. P., Manglis, J. K., Rigler, A. K., Zink, W. T., and Alkon, D. L. (1988). "Accelerating the
Convergence of the Back-propagation Method," Biological Cybernetics, 59, 257-263.

Vogt, M. (1993). "Combination of Radial Basis Function Neural Networks with Optimized Learning Vector
Quantization," in Proceedings of the IEEE International Conference on Neural Networks (San Francisco
1993), vol. III, 1841-1846. IEEE, New York.

Waibel, A. (1989). "Modular Construction of Time-Delay Neural Networks for Speech Recognition,"
Neural Computation, 1, 39-46.

Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. (1989). "Phoneme Recognition Using
Time-Delay Neural Networks," IEEE Transactions on Acoustics, Speech, and Signal Processing, 37, 328-
339.

Wang, C. (1991). A Robust System for Automated Decomposition of the Electromyogram Utilizing a Neural
Network Architecture. Ph.D. Dissertation, Department of Electrical and Computer Engineering, Wayne
State University, Detroit, Michigan.

Wang, Y.-F., Cruz Jr., J. B., and Mulligan Jr., J. H. (1990). "Two Coding Strategies for Bidirectional
Associative Memory," IEEE Transactions on Neural Networks, 1(1), 81-92.

Wang, Y.-F., Cruz Jr., J. B., and Mulligan Jr., J. H. (1991). "Guaranteed Recall of All Training Pairs for
Bidirectional Associative Memory," IEEE Transactions on Neural Networks, 2(6), 559-567.

Wasan, M. T. (1969). Stochastic Approximation. Cambridge University Press, New York.

Watta, P. B. (1994). "A Coupled Gradient Network Approach for Static and Temporal Mixed Integer
Optimization," Ph.D. Dissertation, Department of Electrical and Computer Engineering, Wayne State
University, Detroit, Michigan.

Waugh, F. R., Marcus, C. M., and Westervelt, R. M. (1991). "Reducing Neuron Gain to Eliminate Fixed-
Point Attractors in an Analog Associative Memory," Phys. Rev. A, 43, 3131-3142.

Waugh, F. R., Marcus, C. M., and Westervelt, R. M. (1993). "Nonlinear Dynamics of Analog Associative
Memories," in Associative Neural Memories: Theory and Implementation, M. H. Hassoun, Editor, 197-211.
Oxford University Press, New York.
Wegstein, J. H. (1958). "Accelerating Convergence in Iterative Processes," ACM Commun., 1(6), 9-13.

Weigend, A. S. and Gershenfeld, N. A. (1993). "Results of the Time Series Prediction Competition at the
Santa Fe Institute," Proceedings of the IEEE International Conference on Neural Networks (San Francisco
1993), vol. III, 1786-1793. IEEE, New York.

Weigend, A. S. and Gershenfeld, N. A., Editors (1994). Time Series Prediction: Forecasting the Future and
Understanding the Past. Proc. of the NATO Advanced Research Workshop on Comparative Time Series
Analysis (Santa Fe 1992). Addison-Wesley, Reading MA.

Weigend, A. S., Rumelhart, D. E., and Huberman, B. A. (1991). "Generalization by Weight-Elimination
with Application to Forecasting," in Advances in Neural Information Processing Systems 3 (Denver 1990),
R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Editors, 875-882. Morgan Kaufmann, San Mateo.

Weisbuch, G. and Fogelman-Soulié, F. (1985). "Scaling Laws for the Attractors of Hopfield Networks,"
Journal De Physique Lett., 46(14), L-623-L-630.

Werbos, P. (1974). "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral
Sciences," Ph.D. Dissertation, Committee on Applied Mathematics, Harvard University, Cambridge, MA.

Werbos, P. J. (1988). "Generalization of Backpropagation with Application to Gas Market Model," Neural
Networks, 1, 339-356.

Werntges, H. W. (1993). "Partitions of Unity Improve Neural Function Approximators," in Proceedings of
the IEEE International Conference on Neural Networks (San Francisco 1993), vol. II, 914-918. IEEE, New
York.

Wessels, L. F. A. and Barnard, E. (1992). "Avoiding False Local Minima by Proper Initialization of
Connections," IEEE Transactions on Neural Networks, 3(6), 899-905.

Wettschereck, D. and Dietterich, T. (1992). "Improving the Performance of Radial Basis Function
Networks by Learning Center Locations," in Advances in Neural Information Processing Systems 4
(Denver 1991), J. E. Moody, S. J. Hanson, and R. P. Lippmann, Editors, 1133-1140. Morgan Kaufmann,
San Mateo.

White, H. (1989). "Learning in Artificial Neural Networks: A Statistical Perspective," Neural Computation,
1(4), 425-464.

White, S. A. (1975). "An Adaptive Recursive Digital Filter," in Proc. 9th Asilomar Conf. Circuits Syst.
Comput. (San Francisco 1975), 21-25. Western Periodicals, North Hollywood, CA.

Whitley, D. and Hanson, T. (1989). "Optimizing Neural Networks Using Faster, More Accurate Genetic
Search," in Proceedings of the Third International Conference on Genetic Algorithms (Arlington 1989), J.
D. Schaffer, Editor, 391-396. Morgan Kaufmann, San Mateo.

Widrow, B. (1987). "ADALINE and MADALINE - 1963," Plenary Speech, Proc. IEEE 1st Int. Conf. on
Neural Networks (San Diego 1987), vol. I, 143-158.

Widrow, B. and Angell, J. B. (1962). "Reliable, Trainable Networks for Computing and Control,"
Aerospace Eng., 21 (September issue), 78-123.

Widrow, B. and Hoff Jr., M. E. (1960). "Adaptive Switching Circuits," IRE Western Electric Show and
Convention Record, Part 4, 96-104.

Widrow, B. and Lehr, M. A. (1990). "30 Years of Adaptive Neural Networks: Perceptron, Madaline, and
Backpropagation," Proc. IEEE, 78(9), 1415-1442.
Widrow, B. and Stearns, S. D. (1985). Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs.

Widrow, B., Gupta, N. K., and Maitra, S. (1973). "Punish/Reward: Learning with a Critic in Adaptive
Threshold Systems," IEEE Trans. on System, Man, and Cybernetics, SMC-3, 455-465.

Widrow, B., McCool, J. M., Larimore, M. G., and Johnson Jr., C. R. (1976). "Stationary and Nonstationary
Learning Characteristics of the LMS Adaptive Filter," Proc. IEEE, 64(8), 1151-1162.

Wieland, A. P. (1991). "Evolving Controls for Unstable Systems," in Connectionist Models: Proceedings of
the 1990 Summer School (Pittsburgh 1990), D. S. Touretzky, J. L. Elman, and G. E. Hinton, Editors, 91-
102. Morgan Kaufmann, San Mateo.

Wieland, A. and Leighton, R. (1987). "Geometric Analysis of Neural Network Capabilities," First IEEE
Int. Conf. on Neural Networks (San Diego 1987), vol. III, 385-392. IEEE, New York.

Wiener, N. (1956). I Am a Mathematician. Doubleday, NY.

Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, UK.

Williams, R. J. (1987). "A Class of Gradient Estimating Algorithms for Reinforcement Learning in Neural
Networks," in IEEE First International Conference on Neural Networks (San Diego 1987), M. Caudill and
C. Butler, Editors., vol. II, 601-608. IEEE, New York.

Williams, R. J. (1992). "Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement
Learning," Machine Learning, 8, 229-256.

Williams, R. J. and Zipser, D. (1989a). "A Learning Algorithm for Continually Running Fully Recurrent
Neural Networks," Neural Computation, 1(2), 270-280.

Williams, R. J. and Zipser, D. (1989b). "Experimental Analysis of the Real-Time Recurrent Learning
Algorithm," Connection Science, 1, 87-111.

Willshaw, D. J. and von der Malsburg, C. (1976). "How Patterned Neural Connections can be set up by
Self-Organization," Proceedings of the Royal Society of London, B 194, 431-445.

Winder, R. O. (1962). Threshold Logic, Ph.D. Dissertation, Dept. of Mathematics, Princeton University,
NJ.

Winder, R. O. (1963). "Bounds on Threshold Gate Realizability," IEEE Trans. Elec. Computers, EC-12(5),
561-564.

Wittner, B. S. and Denker, J. S. (1988). "Strategies for Teaching Layered Networks Classification Tasks,"
in Neural Information Processing Systems (Denver 1987), D. Z. Anderson, Editor, 850-859. American
Institute of Physics, New York.

Wong, Y.-F. and Sideris, A. (1992). "Learning Convergence in the Cerebellar Model Articulation
Controller," IEEE Trans. on Neural Networks, 3(1), 115-121.

Xu, L. (1993). "Least Mean Square Error Reconstruction Principle for Self-Organizing Neural-Nets,"
Neural Networks, 6(5), 627-648.

Xu, L. (1994). "Theories of Unsupervised Learning: PCA and its Nonlinear Extensions," IEEE
International Conference on Neural Networks, (Orlando 1994), vol. XXX, XXXpage numbers XXX, IEEE
Press.
Yanai, H. and Sawada, Y. (1990). "Associative Memory Network Composed of Neurons with Hysteretic
Property," Neural Networks, 3(2), 223-228.

Yang, L. and Yu, W. (1993). "Backpropagation with Homotopy," Neural Computation, 5(3), 363-366.

Yoon, Y. O., Brobst, R. W., Bergstresser, P. R., and Peterson, L. L. (1989). "A Desktop Neural Network for
Dermatology Diagnosis," Journal of Neural Network Computing, Summer, 43-52.

Yoshizawa, S., Morita, M. and Amari, S.-I. (1993a). "Capacity of Associative Memory Using a
Nonmonotonic Neuron Model," Neural Networks, 6(2), 167-176.

Yoshizawa, S., Morita, M., and Amari, S.-I. (1993b). "Analysis of Dynamics and Capacity of Associative
Memory Using a Nonmonotonic Neuron Model," in Associative Neural Memories: Theory and
Implementation, M. H. Hassoun, Editor, 239-248. Oxford University Press, New York.

Youssef, A. M. and Hassoun, M. H. (1989). "Dynamic Autoassociative Neural Memory Performance vs.
Capacity," Proc. SPIE, Optical Pattern Recognition, H.-K. Liu, Editor, 1053, 52-59.

Yu, X., Loh, N. K., and Miller, W. C. (1993). "A New Acceleration Technique for the Backpropagation
Algorithm," in IEEE International Conference on Neural Networks (San Francisco 1993), vol. III, 1157-
1161, IEEE, New York.

Yuille, A. L., Kammen, D. M., and Cohen, D. S. (1989). "Quadrature and the Development of Orientation
Selective Cortical Cells by Hebb Rules," Biological Cybernetics, 61, 183-194.

Zak, M. (1989). "Terminal Attractors in Neural Networks," Neural Networks, 2(4), 258-274.

Zhang, J. (1991). "Dynamics and Formation of Self-Organizing Maps," Neural Computation, 3(1), 54-66.
Subject Index

A
α-LMS rule, 67
μ-LMS rule, 67-69, 72
φ-general position, 20-22, 25
φ-mapping, 20
φ-separable, 20

dichotomy, 21, 25
φ-space, 20-21, 25
φ-surface, 20, 24-25
analog-to-digital (A/D) converter, 398, 413-414
activation function, 219
homotopy, 220
hyperbolic tangent, 77, 198, 201, 220, 332
hysteretic, 387-388
logistic function, 77, 198, 201
nondeterministic, 428
nonmonotonic, 385
nonsigmoidal, 49
sigmoidal, 46, 76
sign (sgn), 79, 347, 383
activity pattern, 433
activation slope, 220, 332
ADALINE, 66, 85
adaptive linear combiner element, 66
adaptive resonance theory (ART) networks, 323-328
adjoint net, 270
admissible pattern, 7
AHK see Ho-Kashyap learning rules
AI, 51
algorithmic complexity, 51
ALVINN, 244-246
AND, 3
gate/unit, 36, 302
ambiguity, 24, 182
ambiguous response, 26
analog optical implementations, 53
analog VLSI technology, 53
annealing
deterministic, 369
schedule, 428
see also simulated annealing
anti-Hebbian learning, 100
approximation
capability, 48, 221
function, 144
theory, 144
architecture
adaptive resonance, 323
AND-OR network, 35-36
bottleneck, 250, 332
CCN, 319
fully recurrent, 259
multilayer feedforward, 38, 40, 43, 184, 197, 261
partially recurrent, 259
randomly interconnected feedforward, 38,40
recurrent, 254
threshold-OR net, 36
time-delay, 254
unit-allocating, 318
artificial intelligence, 51
artificial neural network, 1
artificial neuron, 1
ART, 323-328
association
auto, 346
hetero, 346
pairs, 346
associative memory, 271, 345
absolute capacity, 363, 365, 381, 385, 387, 390
basin of attraction, 366, 374-375, 378, 388
cross-talk, 347, 348, 364, 381, 383
default memory, 353
dynamic (DAM), 345, 353-374
error correction, 348, 352, 363, 366, 370-371, 385, 389
fundamental memories, 365, 374
ground state, 375
high-performance, 374
linear (LAM), 346
low-performance, 375
noise-suppression, 351
optimal linear (OLAM), 350-351
oscillation, 367, 373
oscillation-free, 373
performance characteristics, 374
performance criteria, 374-375
recording/storage recipe, 346
relative capacity, 363, 366-367, 388
simple, 346
spurious memories, 359, 363, 372, 374-375, 382
variations on, 275-394
see also DAM
associative reward-penalty, 89
asymptotic stability, 268, 358
asymptotically stable points, 148
attractor state, 265, 270, 354, 366, 375
autoassociative net, 248
autoassociative clustering net, 328
autocorrelation matrix, 71, 91, 97-98, 150
automatic scaling, 325
average entropic error, 186
average generalization ability, 183
average learning equation, 147, 176
average prediction error, 183

backprop, 199-202, 234, 271, 455

see also backpropagation

backprop net, 318


backpropagation, 199-202
activation function, 219-221
applications, 234-253
basin of attraction, 203, 208
batch mode, 202
convergence speed, 211
criterion (error) functions, 199, 202, 230-234
derivation,199-201
example, 203-205,
generalization phenomenon, 229
incremental, 201, 202-203, 213, 224, 271
Langevin-type, 424
learning rate, 199-202, 211-213
local minima, 203, 358, 421
momentum, 213-218
network, 198
recurrent, 265-271
second-order, 218
stochastic, 202
through time, 259-262
time-dependent recurrent, 271-274
variations on, 211-226, 230-234
weight initialization, 210-211
weight decay, 221, 225
basin of attraction, 328
basis function, 287
Gaussian, 287
batch mode/update, 63, 81, 90, 167, 172, 290
Bayes decision theory, 112
Bayes classifier, 310
Bayes' rule, 434
bell-shaped function, 49
Bernstein polynomials, 49
bias, 58, 197, 308, 354, 376
bidirectional associative memory (BAM), 393
binary representation, 398
binomial
distribution, 364
expansion, 364
theorem, 445
bit-wise complementation, 441
border aberration effects, 116
Boltzmann machine, 431-432
Boltzmann constant, 422, 426
Boltzmann-Gibbs distribution, 426, 431-432
Boltzmann learning, 431
Boolean functions, 3-4, 35, 42, 188
threshold, 5
nonthreshold, 22
random, 304
see also AND, XNOR, XOR
bottleneck, 250
brain-state-in-a-box (BSB) model, 331, 375-381
see also DAM
building block hypothesis, 446
building blocks, 446
Butz's rule, 63

calculus of variations, 272


calibration, 107

process, 107
capacity, 17, 29, 380
Hopfield network, 363-365
linear threshold gate (LTG), 19, 41
polynomial threshold gate (PTG), 21
see also associative memory capacity
cascade-correlation net (CCN), 318-322
CCN, 318-322
center of mass, 105
central limit theorem, 364, 436
cerebellar model articulation controller (CMAC), 301-304
relation to Rosenblatt's perceptron, 304-309
cerebellum, 301
chaos hypothesis, 416
Chebyshev polynomial, 49
classifier, 50, 306, 311
classifier system, 461
classifiers, 461
cluster membership matrix, 167
cluster granularity, 331, 334
clusters, 107, 125, 168, 288
clustering, 106, 171, 328
behavior, 326
network, 106, 322
CMAC, 301-304, 306
codebook, 110
combinatorial complexity, 395
combinatorial optimization, 120, 429
compact representation, 311
competitive learning, 103, 167, 290
stochastic analysis, 168
deterministic analysis, 167
complexity, 51
algorithmic, 51
Kolmogorov, 51
learning, 180, 187, 310
polynomial, 318
space, 51
time, 51
computational complexity, 269
computational energy, 52, 357
concept forming cognitive model, 328
conditional probability density, 85
conjugate gradient method, 217-218
connections
lateral, 100, 173
lateral-inhibitory, 323
self-excitatory, 323
constraints satisfaction term, 396
controller, 264
convergence in the mean, 69
convergence phase, 116
convergence-inducing process, 424
convex, 70
cooling schedule, 427-428
correlation learning rule, 76, 148
correlation matrix, 91, 102, 346
eigenvalues, 91
eigenvectors, 91
correlation memory, 346
cost function, 143, 395, 397
see also criterion function
cost term, 396
Coulomb potential, 312
covariance, 320
critic, 88
critical features, 306-308
cross-correlations, 91
cross-correlation vector, 71
crossing site, 441
crossover see genetic operators
cross-validation, 187, 226-230, 290
criterion function, 58, 63, 68, 127-133, 143, 145, 155, 195, 230-234
backprop, 199
Durbin and Willshaw, 141
entropic, 86, 89, 186, 220, 230
Gordon et al., 137
Kohonen's feature map, 171
mean-square error (MSE), 71
Minkowski-r, 83, 168, 230, 231
perceptron, 63, 65, 86
sum of squared error (SSE), 68, 70, 79, 82, 87, 288, 353
travelling salesman problem,
well-formed, 86, 231
see cost function
criterion functional, 271
critical overlap, 384, 386
curve fitting, 221
Cybenko's theorem, 47
training cycle, 59
training pass, 59

DAM, 353-374

bidirectional (BAM), 393


BSB, 375-381
correlation, 363, 381
combinatorial optimization, 394-399
exponential capacity, 389-391
heteroassociative, 392-394
hysteretic activations, 386-388
nonmonotonic activations, 381
projection, 369-374
sequence generator, 391-392
see also associative memory
data compression, 109
dead-zone, 62
DEC-talk, 236
decision
boundary, 311
hyperplane, 62
region, 311
surface, 16, 24
deep net, 318
degrees of freedom, 21, 27, 29, 332, 425
tapped-delay lines, 254
delta learning rule, 88, 199, 289, 291, 455, 459
density-preserving feature, 173
desired associations, 346
desired response, 268
deterministic annealing, 369
deterministic unit, 87
device physics, 52
diffusion process, 421
dichotomy, 15, 25, 43
linear, 16-18, 25
machine, 185
dimensionality
expansion, 188
reduction, 97, 120
direct Ho-Kashyap (DHK) algorithm, 80
discrete-time states, 274
distributed representation, 250, 308-309, 323, 333
distribution
Cauchy, 232
Gaussian, 231
Laplace, 231
non-Gaussian, 231
don't care states, 12
dynamic associative memory
see DAM
dynamic mutation rate, 443
dynamic slope, 332

eigenvalue, 92, 361, 376


eigenvector extraction, 331
elastic net, 120
electromyogram (EMG) signal, 334
EMG, 334-337
encoder, 247
energy function, 357, 362, 376, 393, 426, 420, 432
truncated, 359
entropic loss function, 186
entropy, 433
environment
nonstationary, 153
stationary, 153
ergodic, 147
error-backpropagation network
see backpropagation
error function, 70, 143, 199, 268
see also criterion function
error function, erf(x), 364
error rate, 348
error suppressor function, 232
estimated target, 201
Euclidean norm, 288
exclusive NOR
see XNOR
exclusive OR, 5
see XOR
expected value, 69
exploration process, 424
exponential decay laws, 173
extrapolation, 295
extreme inequalities, 189
extreme points, 27-29
expected number, 28-29

false-positive classification error, 294-295


feature map(s), 171, 241
Feigenbaum time-series, 278
Fermat's stationarity principle, 418
filter
FIR, 254
IIR, 257
filtering,
linear, 154
nonlinear, 257
finite difference approximation, 272
fitness, 440
fitness function, 439, 457
multimodal, 443
see also genetic algorithms
fixed point method, 360
fixed points,
flat spot, 77, 86, 211, 219, 220, 231
elimination, 77
forgetting
term, 146, 157
effects, 179
free parameters, 58, 288, 295, 311, 354
effective, 233
function approximation, 46, 294, 299, 319
function counting theorem, 17
function decomposition, 296
fundamental theorem of genetic algorithms, 443

GA see genetic algorithm


GA-assisted supervised learning, 454
GA-based learning methods, 452
GA-deceptive problems, 446
gain, 357
Gamba perceptron, 308
Gaussian-bar unit
see units
Gaussian distribution
see distribution
Gaussian unit
see unit
general learning equation, 145
general position, 16-17, 41
generalization, 24, 145, 180, 186, 221, 226, 243, 295
ability, 182
average, 180
enforcing, 233
error, 26-27, 180, 185, 187, 226
local, 302, 307
parameter, 301-302
theoretical framework, 180-187
worst case, 180, 184-185
generalized inverse, 70, 291
see also pseudo-inverse
genetic algorithm (GA), 439-447
example, 447-452
genetic operators, 440, 442
crossover, 440, 442, 445
mutation, 440, 443, 445
reproduction, 440
global
descent search, 206, 421
fit, 295
minimal solution, 395
minimum, 70, 417
minimization, 419
optimization, 206, 419, 425
search strategy, 419
Glove-Talk, 236-240
gradient descent search, 64, 200, 202, 288, 300, 418-419
gradient-descent/ascent strategy, 420
gradient net, 394-399
gradient system, 148, 358, 376, 396-397
Gram-Schmidt orthogonalization, 217
Greville's theorem, 351
GSH model, 307, 310
guidance process, 424

Hamming Distance, 371


hypersphere, 365
net, 408
normalized, 307, 349, 365
handwritten digits, 240
hard competition, 296-297
hardware annealing, 396, 438
Hassoun's rule, 161
Hebb rule, 91
normalized, 100
Hermitian matrix, 91
Hessian matrix, 64, 70, 98, 148-149, 212, 215, 418
hexagonal array, 124
hidden layer, 197-198, 286
weights, 200
hidden targets, 455
hidden-target space, 455-456, 458
hidden units, 198, 286, 431
higher-order statistics, 101, 333
higher-order unit, 101, 103
Ho-Kashyap algorithm, 79, 317
Ho-Kashyap learning rules, 78-82
Hopfield model/net, 354, 396-397, 429
capacity, 19
continuous, 360
discrete, 362
stochastic, 430-431, 437
hybrid GA/gradient search, 453-454
hybrid learning algorithms, 218, 453
hyperbolic tangent activation function
see activation function
hyperboloid, 316
hypercube, 302, 357, 376, 396
hyperellipsoid, 316
hyperplane, 5, 16-17, 25, 59, 311
hypersurfaces, 311
hyperspheres, 315
hyperspherical classifiers, 311
hysteresis, 386-387
term, 387

ill-conditioned, 292
image compression, 247-252
implicit parallelism, 447
incremental gradient descent, 64, 202
incremental update, 66, 88, 202
input sequence, 261
input stimuli, 174
instantaneous error function, 268
instantaneous SSE criterion function, 77, 199
interconnection
matrix, 330, 346, 357
weights, 350
interpolation, 223, 290
function, 291
matrix, 292
quality, 292
Ising model, 429
isolated-word recognition, 125

Jacobian matrix, 149


joint distribution, 85

Karnaugh map (K-map), 6, 36


K-map technique, 37
k-means clustering, 167, 290
incremental algorithm, 296
k-nearest neighbor classifier, 204, 310, 318
Karhunen-Loève transform, 98
see also principal component analysis
kernel, 287
unit, 290
key input pattern, 347-348
kinks, 172-173
Kirchhoff's current law, 354
Kohonen's feature map, 120, 171
Kolmogorov's theorem, 46-47
Kronecker delta, 268

Lagrange multiplier, 145, 272


LAM
see associative memory
Langevin-type learning, 424
Laplace-like distribution, 85
lateral connections, 101
lateral inhibition, 103
lateral weights, 173
distribution, 173
leading eigenvector, 97
learning
associative, 57
autoassociative, 328
Boltzmann, 431, 439
competitive, 105, 290
hybrid, 218
hybrid GA/gradient descent, 453, 457
Langevin-type, 424
leaky, 106
on-line, 295
parameter, 153
reinforcement, 57, 87-89, 165
robustness, 232
signal, 146
supervised, 57, 88, 289
temporal, 253
unsupervised, 57
see also learning rule
learning curve, 183-184, 186
learning rate/coefficient, 59, 71, 173
adaptive, 110, 173
dynamic, 208
search-then-converge, 213
optimal, 212
learning rule, 57
anti-Hebbian, 101, 434, 436
associative reward penalty, 89
backprop, 199-202, 455
Boltzmann, 433-434
Butz, 63
competitive, 105
correlation, 76, 148
covariance, 76
delta, 88, 199, 291, 459
global descent, 206-209
Hassoun, 161
Hebbian, 91, 151, 154-155, 176, 434,436
Ho-Kashyap, 78-82, 304, 308
Linsker, 95-97
LMS, 67-69, 150, 304, 330, 454
Mays, 66
Oja, 92, 155-156
perceptron, 58, 304
pseudo-Hebbian, 178
Sanger, 99
Widrow-Hoff, 66, 330
Yuille et al., 92, 158
learning vector quantization, 111
least-mean-square (LMS) solution, 72
Levenberg-Marquardt optimization, 218
Liapunov
asymptotic stability theorem, 358
first method, 149
function, 150, 331, 334, 357-361, 430
global asymptotic stability theorem, 359
second (direct) method, 358
limit cycle, 362, 367, 371, 378
linear array, 120
linear associative memory, 346
linear matched filter, 157
linear programming, 301, 316-317
linear threshold gate (LTG), 2, 304, 429
linear separability, 5, 61, 80
linear unit, 50, 68, 99, 113, 286, 346
linearly dependent, 350
linearly separable mappings, 188, 455
Linsker's rule, 95-97
LMS rule, 67-69, 150, 304, 330
batch, 68, 74
incremental, 74
see learning rules
local
encoding, 51
excitation, 176-177
local fit, 295
local maximum, 64, 420
local minimum, 109, 203, 417
local property, 157
locality property, 289
locally tuned
units, 286
representation, 288
response, 285
log-likelihood function, 85
logic cell array, 304
logistic function, 77, 288
also see activation function
lossless amplifiers, 397
lower bounds, 41, 43
LTG, 2, 304
network, 35
see linear threshold gate
LTG-realizable, 5
LVQ, 111

Mackey-Glass time-series, 281, 294, 322


Manhattan norm, 84
matrix differential calculus, 156
margin, 62, 81
vector, 79-80
MAX operation, 442
max selector, 407
maximum likelihood, 84
estimate, 85, 231, 297
estimator, 186
Mays's rule, 66
McCulloch-Pitts unit
see linear threshold gate
medical diagnosis expert net, 246
mean-field
annealing, 436, 438
learning, 438-439
theory, 437-438
mean transitions, 436
mean-valued approximation, 436
memorization, 222
memory
see associative memory
memory vectors, 347
Metropolis algorithm, 426-427
minimal disturbance principle, 66
minimal PTG realization, 21-24
minimum Euclidean norm solution, 350
minimum energy configuration, 425
minimum MSE solution, 72
minimum SSE solution, 71, 76, 144, 150, 291
Minkowski-r criterion function, 83, 230
see criterion functions
Minkowski-r weight update rule, 231- 232
minterm, 3
misclassified, 61, 65
momentum, 213-214
motor unit,
potential, 334-337
moving-target problem, 318
multilayer feedforward networks
see architecture
multiple point crossover, 451, 456
multiple recording passes, 352
multipoint search strategy, 439
multivariate function, 420
MUP, 337
mutation
see genetic operators

NAND, 6
natural selection, 439
neighborhood function, 113, 171, 173, 177-180
NETtalk, 234-236
neural field, 173
potential, 174
neural network architecture
see architecture
neural net emulator, 264
Newton's method, 64, 215-216
approximate, 215, 242
nonlinear activation function, 102
nonlinear dynamical system, 264
nonlinear representations, 252
nonlinear separability, 5, 63, 80
nonlinearly separable
function, 12
mapping, 188
problems, 78
training set, 66, 306
nonstationary process, 154
nonstationary input distribution, 326
nonthreshold function, 5, 12
NOR, 6
nonlinear PCA, 101, 332
NP-complete,
normal distribution, 84
nonuniversality, 307
Nyquist's sampling criterion, 292

objective function, 63, 143, 395, 417, 419


see also criterion function
off-line training, 272
Oja's rule
1-unit rule, 92, 155-156
multiple-unit rule, 99
Oja's unit, 157
OLAM
see associative memory
on-center off-surround, 173-174
on-line classifier, 311
on-line implementations, 322
on-line training, 295
optical interconnections, 53
optimal learning step, 212, 279
optimization, 417, 439
OR, 3
gate/unit, 36, 302
ordering phase, 116
orthonormal vectors, 347
orthonormal set, 15
outlier data (points), 168, 231, 317
overfitting, 145, 294, 311, 322
overlap, 371, 382
critical, 384, 386
overtraining, 230

parity function, 35-37, 308, 458


partial recurrence, 264
partial reverse dynamics, 383
partial reverse method, 385
partition of unity, 296
pattern completion, 435
pattern ratio, 349, 367, 370
pattern recognition, 51
PCA net, 99, 163, 252
penalty term, 225
perceptron criterion function, 63, 65, 86
perceptron learning rule, 58-60, 304
convergence proof, 61-62
perfect recall, 347, 350
phase diagram, 367
origin phase, 372
oscillation phase, 372
recall phase, 372
spin-glass phase, 372
phonemes, 235
piece-wise linear operator, 376
phonotopic map, 124
plant identification, 256, 264
Polak-Ribière rule, 217
polynomial, 15, 144, 224
approximation, 16, 222, 291
training time, 189, 310
polynomial complexity, 318, 395
polynomial threshold gate (PTG), 8, 287, 306
polynomial-time classifier (PTC), 316-318
population, 440
positive definite, 70, 149, 361, 418
positive semidefinite, 150
postprocessing, 301
potential
energy, 422
function, 147, 312
field, 177
power dissipation, 52
power method, 92, 212, 330
prediction error, 85
prediction set, 230
premature convergence, 428
premature saturation, 211
preprocessing, 11
layer, 11, 102
principal component(s), 98, 101, 252
analysis (PCA), 97, 332
nonlinear, 101, 253
subspace, 250
principal directions, 98
principal eigenvector, 92, 164
principal manifolds, 252
probability of ambiguous response, 26-27
prototype extraction, 328
phase, 333
prototype unit, 322, 324
pruning, 225, 301
pseudo-inverse, 70, 79, 353
pseudo-orthogonal, 347
PTC net, 316-318
PTG
see polynomial threshold gate

Q
QTG
see quadratic threshold gate
quadratic form, 155, 361
quadratic function, 357
quadratic threshold gate (QTG), 7, 316
quadratic unit, 102
quantization, 250
quickprop, 214

R
radial basis functions, 285-287


radial basis function (RBF) network, 286-294
radially symmetric function, 287
radius of attraction, 390
random motion, 422
random problems, 51
Rayleigh quotient, 155
RBF net, 285, 294
RCE, 312-315
RCE classifier, 315
real-time learning, 244
real-time recurrent learning (RTRL) method, 274-275
recall region, 367
receptive field, 287, 292
centers, 288
localized, 295, 301
overlapping, 302
semilocal, 299
width, 288
recombination mechanism, 440
reconstruction vector, 109
recording recipe, 346
correlation, 346, 359, 376
normalized correlation, 348
projection, 350, 370, 386
recurrent backpropagation, 265-271
recurrent net, 271, 274
see also architecture
region of influence, 311, 313, 316
regression function, 73
regularization
effects, 163
smoothness, 296
term, 145, 150, 152, 225, 232, 340
reinforcement learning, 57, 87
reinforcement signal, 88, 90, 165
relative entropy error measure, 85, 230, 433
see also criterion function
relaxation method, 359
repeller, 207
replica method, 366
representation layer, 250
reproduction
see genetic operators
resonance, 325
see also adaptive resonance theory
resting potential, 174
restricted Coulomb energy (RCE) network, 312-315
retinotopic map, 113
retrieval
dynamics, 382
mode, 345
multiple-pass, 366, 371
one-pass, 365
parallel, 365
properties, 369
Riccati differential equation, 179
RMS error, 202
robust decision surface, 78
robust regression, 85
robustness, 62
robustness preserving feature, 307, 309
root-mean-square (RMS) error, 202
Rosenblatt's perceptron, 304
roulette wheel method, 440, 442
row diagonal dominant matrix, 379
S

saddle point, 358


Sanger's PCA net, 100
Sanger's rule, 99, 163
search space, 452
self-connections, 388
self-coupling terms, 371
self-scaling, 325
self-stabilization property, 323
sensitized units, 106
sensor units, 302
scaling
see computational complexity
schema (schemata), 443, 447
order, 444
defining length, 444
search,
direction, 217
genetic, 439-452
global, 206-209, 419
gradient ascent, 419
gradient descent, 64, 268, 288, 419
stochastic, 431
second-order search method, 215
second-order statistics, 101
self-organization, 112
self-organizing
feature map, 113, 171
neural field, 173
semilocal activation, 299
sensory mapping,
separating capacity, 17
separating hyperplane, 18
separating surface, 21
nonlinear, 14
sequence
generation, 391
recognition, 254-255
reproduction, 254-256
series-parallel identification model, 257
shortcut connections, 321
sigmoid function, 48, 144
see also activation functions
sign function
see activation functions
signal-to-noise ratio, 375, 402
similarity
measure, 168
relationship, 328
simulated annealing, 425-426
algorithm, 427, 431
single-unit training, 58
Smoluchowski-Kramers equation, 422
smoothness regularization, 296
SOFM, 125
see also self-organizing feature map
soft competition, 296, 298
soft weight sharing, 233
solution vector, 60
somatosensory map, 113, 120
space-filling curve, 120
sparse binary vectors, 409
spatial associations, 265
specialized associations, 352
speech processing, 124
speech recognition, 125, 255
spin glass, 367
spin glass region, 367
spurious cycle, 393
spurious memory
see associative memory
SSE
see criterion function
stability-plasticity dilemma, 323
stable categorization, 323
state space, 375
detectors, 302
trajectories, 273
state variable, 263
static mapping, 265
statistical mechanics, 425, 429
steepest gradient descent method, 64
best-step, 216
optimal, 216, 279
Stirling's approximation, 42
stochastic
approximation, 71, 73
algorithm, 147
differential equation, 147, 179
dynamics, 148
force, 422
global search, 431
gradient descent, 109, 421, 425
learning equation, 148
network, 185, 428, 438
optimization, 439
process, 71-72, 147
transitions, 436
unit, 87, 165, 431
Stone-Weierstrass theorem, 48
storage capacity
see capacity
storage recipes, 345
strict interpolation, 291
method, 293
string, 439
strongly mixing process, 152, 176, 179
supervised learning, 57, 88, 455
also see learning rules
sum-of-products, 3
sum of squared error, 68
superimposed patterns, 328
survival of the fittest, 440
switching
algebra, 3
functions, 3-4, 35
theory, 35
see also Boolean functions
symmetry-breaking mechanism, 211
synapse, 52
synaptic efficacies, 90
synaptic signal
post-, 90
pre-, 90
synchronous dynamics
see dynamics

T
Takens's theorem, 256


tapped delay lines, 254-259
teacher forcing, 275
temperature, 422, 425
templates, 109
temporal association, 254, 259, 262-265, 391
temporal associative memory, 391
temporal learning, 253-275
terminal repeller, 207
thermal energy, 426
thermal equilibrium, 425, 434, 437
threshold function, 5
threshold gates, 2
linear, 2
polynomial, 8
quadratic, 7
threshold logic, 1, 37
tie breaker factor, 326
time-delay neural network, 254-259
time-series prediction, 255, 273
topographic map, 112
topological ordering, 116
trace, 351
training error, 185, 227
training with rubbish, 295
training set, 59, 144, 230
transition probabilities, 426
travelling salesman problem (TSP), 120
truck backer-upper problem, 263
truth table, 3
TSP
see travelling salesman problem
tunneling, 206, 208, 421
Turing machine, 275
twists, 172-173
two-spiral problem, 321

U
unfolding, 261
in time, 265
unit
Gaussian, 299-300, 318
Gaussian-bar, 299-300
hysteretic, 388
linear, 50, 68, 99, 113, 286, 346
quadratic, 102
sigmoidal, 299-300, 318
unit allocating net, 310, 318
unit elimination, 221
unity function, 296
universal approximation, 198, 221, 290, 304, 454
universal approximator, 15, 48, 287, 308
of dynamical systems, 273
universal classifier, 50
universal logic, 36
gate, 6
unsupervised learning, 57, 90
competitive, 105, 166
Hebbian, 90, 151
self-organization, 112
see also learning rule
update
nous, 360
parallel, 369
serially, 369
upper bound, 43

V
validation error, 226
validation set, 227, 230
Vandermonde's determinant, 32
VC dimension, 185
vector energy, 60, 352
vector quantization, 103, 109-110
Veitch diagram, 6
vigilance parameter, 325-328, 333
visible units, 431
VLSI, 304
analog, 53
Voronoi
cells, 110
quantizer, 110
tessellation, 111
vowel recognition, 298

W
Weierstrass's approximation theorem, 15


weight, 2
decay, 157, 163, 187, 221, 225
decay term, 157
elimination, 221
initialization, 210
sharing, 187, 233, 241
space, 458
update rule, 64
vector, 2

weight-sharing interconnections, 240


weighted sum, 2-3, 288, 382, 428
Widrow-Hoff rule, 66, 330
Wiener weight vector, 72
winner-take-all, 103
competition, 104, 170
network, 104, 323-324, 408
operation, 104, 297
see also competitive learning
winner unit, 113, 324

X
XNOR, 14
XOR, 5

Y
Yuille et al. rule, 92, 158


Z

ZIP code recognition, 240-243


Errata Sheet for M. H. Hassoun, Fundamentals of Artificial Neural Networks (MIT Press, 1995)