ARTIFICIAL NEURAL NETWORKS
CONTENTS
INTRODUCTION
BIOLOGICAL NEURON MODEL
ARTIFICIAL NEURON MODEL
ARTIFICIAL NEURAL NETWORK
NEURAL NETWORK ARCHITECTURE
LEARNING
BACKPROPAGATION ALGORITHM
APPLICATIONS
ADVANTAGES
CONCLUSION
INTRODUCTION
Neural is an adjective for neuron, and network denotes a graph-like
structure.
Artificial Neural Networks are also referred to as neural nets, artificial
neural systems, parallel distributed processing systems, and connectionist
systems.
For a computing system to be called by these names, it must have a labeled
directed graph structure in which the nodes perform some simple
computations.
A directed graph consists of a set of nodes (vertices) and a set of
connections (edges/links/arcs), each connecting a pair of nodes.
A graph is said to be a labeled graph if each connection is associated with a
label identifying some property of the connection.
CONTD
Fig: A labeled directed graph computing o = x1 AND x2. The binary inputs
x1, x2 ∈ {0,1} travel along edges labeled with weights w1 and w2; each edge
acts as a multiplier, producing the products (x1 w1) and (x2 w2).
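The AND-gate graph above can be sketched in code. This is a minimal illustration, not from the slides: the specific weight and threshold values (w1 = w2 = 1, threshold 2) are assumptions chosen so that the node outputs 1 only when both inputs are 1.

```python
# A single graph node computing o = x1 AND x2.
# Each edge multiplies its input by its weight label; the node sums the
# products and thresholds the result. Weights and threshold are assumed
# illustrative values, not taken from the slides.
def and_node(x1, x2, w1=1, w2=1, threshold=2):
    s = x1 * w1 + x2 * w2  # the edge "multipliers" (x1 w1) and (x2 w2), summed
    return 1 if s >= threshold else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, and_node(x1, x2))
```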
The field of neural networks was pioneered by Bernard Widrow of Stanford
University in the 1950s.
BIOLOGICAL NEURON MODEL
sum = w1 x1 + ... + wn xn
These products are simply summed, then fed through the transfer function
f( ) to generate a result, which is then output.
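The weighted-sum-and-transfer computation described above can be sketched as follows. This is a minimal illustration; the step transfer function is an assumed choice, since the slides leave f( ) unspecified.

```python
# Sketch of the neuron model above: inputs x1..xn are multiplied by
# weights w1..wn, the products are summed, and the sum is passed through
# a transfer function f to produce the output.

def transfer(s):
    # Simple step function - an assumed choice of f( ) for illustration.
    return 1 if s >= 0 else 0

def neuron_output(xs, ws):
    s = sum(w * x for w, x in zip(ws, xs))  # sum = w1*x1 + ... + wn*xn
    return transfer(s)

print(neuron_output([0.5, -1.0, 2.0], [1.0, 1.0, 1.0]))  # sum = 1.5 -> 1
```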
TERMINOLOGY
Neuron: Node / Unit / Cell / Neurode
Synapse: Connection / Edge / Link
Fig 1: Artificial neural network model, showing an input layer, hidden
layers, and an output layer. The connections between neurons carry weights,
and the actual output is compared with the desired output to adjust the
network.
NEURAL NETWORK ARCHITECTURES
Fig: Fully connected network | Fig: Layered network
Fully connected network: a neural network in which every node is connected
to every other node, and these connections may be excitatory (positive
weights), inhibitory (negative weights), or irrelevant (almost zero
weights).
Layered network: a network in which the nodes are partitioned into subsets
called layers, with no connections from layer j to layer k if j > k.
CONTD
Fig: Acyclic network | Fig: Feedforward network
(Layer 0: input layer; Layers 1 and 2: hidden layers; Layer 3: output layer)
Acyclic network: a subclass of the layered networks in which there are no
intra-layer connections. In other words, a connection may exist between any
node in layer i and any node in layer j for i < j, but a connection is not
allowed for i = j.
Feedforward network: a subclass of acyclic networks in which a connection
is allowed from a node in layer i only to nodes in layer i+1.
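A feedforward network as defined above can be sketched as a forward pass through successive layers, where each node in layer i+1 receives input only from the nodes in layer i. The layer sizes and random weights here are illustrative assumptions, not values from the slides.

```python
# Feedforward network sketch: nodes are grouped into layers, and a node
# in layer i connects only to nodes in layer i+1.
import random

random.seed(0)
layer_sizes = [3, 4, 2]  # input, hidden, output layer sizes (assumed)

# weights[i][j][k]: weight from node k in layer i to node j in layer i+1
weights = [
    [[random.uniform(-1, 1) for _ in range(layer_sizes[i])]
     for _ in range(layer_sizes[i + 1])]
    for i in range(len(layer_sizes) - 1)
]

def forward(x):
    activation = x
    for layer in weights:
        # Each node sums its weighted inputs from the previous layer only.
        activation = [sum(w * a for w, a in zip(node_ws, activation))
                      for node_ws in layer]
    return activation

print(forward([1.0, 0.5, -0.5]))
```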
CONTD
The adjustment of each weight (wji) will be the negative of a constant eta
(η) multiplied by the dependence of the network's error on that weight:
Δwji = -η · ∂E/∂wji
First, we need to calculate how much the error depends on the output; next,
how much the output depends on the activation, which in turn depends on the
weights:
∂E/∂wji = (∂E/∂O) · (∂O/∂A) · (∂A/∂wji)
If we want to adjust the weights of a previous layer (let us call them
vik), we first need to calculate how the error depends not on the weight,
but on the input from the previous layer, i.e. replacing w by x in the
equation above.
Fig: Two-layer network with inputs x, first-layer weights v, and
second-layer weights w, producing the output.
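The update rule above (each weight changes by minus eta times the derivative of the error with respect to that weight) can be sketched for a single output neuron. The sigmoid transfer function and squared error are assumed choices for illustration; the slides do not fix them.

```python
# Gradient-descent sketch of delta_w = -eta * dE/dw for one neuron.
# Assumptions: sigmoid transfer function, squared error E = (o - d)^2 / 2.
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, w, desired, eta=0.5):
    a = sum(wi * xi for wi, xi in zip(w, x))  # activation: weighted sum
    o = sigmoid(a)                            # output of the neuron
    # Chain rule, as in the text: error -> output -> activation -> weight
    # dE/dwi = (o - desired) * o * (1 - o) * xi
    grad = [(o - desired) * o * (1 - o) * xi for xi in x]
    # Each weight moves by -eta times the error's dependence on it.
    return [wi - eta * gi for wi, gi in zip(w, grad)], o

w = [0.2, -0.1]
for _ in range(1000):
    w, o = train_step([1.0, 1.0], w, desired=1.0)
print(round(o, 2))  # output approaches the desired value 1.0
```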
NEURAL NETWORK APPLICATIONS
ADVANTAGES
They involve human-like thinking.
They handle noisy or missing data.
They can work with a large number of variables or parameters.
They provide general solutions with good predictive accuracy.
The system has the property of continuous learning.
They deal with the non-linearity of the world in which we live.
CONCLUSION