Artificial Neural Networks


Overview

1. Biological inspiration
2. Artificial neurons and neural networks
3. Learning processes
4. Learning with artificial neural networks
Biological inspiration
Animals are able to react adaptively to changes in their
external and internal environment, and they use their nervous
system to produce these behaviours.
An appropriate model/simulation of the nervous system
should be able to produce similar responses and behaviours in
artificial systems.
The nervous system is built from relatively simple units, the
neurons, so copying their behaviour and functionality is a
natural route to building adaptive artificial systems.
Biological inspiration

[Diagram: a biological neuron, showing the dendrites, the soma
(cell body), the axon, and the synapses]

The information transmission happens at the synapses.


The Neuron - A Biological Information Processor

• dendrites - the receivers
• soma - neuron cell body (sums the input signals)
• axon - the transmitter
• synapse - the point of transmission
• a neuron activates only after a certain threshold is met

Learning occurs via electro-chemical changes in the
effectiveness of the synaptic junctions.
From Biological to Artificial Neurons

• An Artificial Neuron - The Perceptron

• simulated in hardware or in software
• input connections - the receivers
• node (unit) - simulates the neuron cell body
• output connection - the transmitter
• activation function employs a threshold or bias
• connection weights act as synaptic junctions

Learning occurs via changes in the values of the connection
weights.
The Perceptron
An Artificial Neuron - The Perceptron

• The basic function of a neuron is to sum its inputs and
produce an output when the sum is greater than a threshold.
• An ANN node produces an output as follows (see the sketch
after this list):
1. Multiply each component of the input pattern by the weight
of its connection
2. Sum all the weighted inputs and subtract the threshold
value => the total weighted input
3. Transform the total weighted input into the output using
the activation function
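
A minimal sketch of these three steps in Python; the weights,
threshold, and step activation below are made-up values for
illustration:

def step(v):
    # Threshold activation: fire (1) only if the total weighted input is positive.
    return 1 if v > 0 else 0

def perceptron_output(inputs, weights, threshold):
    # 1. Multiply each input component by the weight of its connection.
    weighted = [x * w for x, w in zip(inputs, weights)]
    # 2. Sum the weighted inputs and subtract the threshold => total weighted input.
    total = sum(weighted) - threshold
    # 3. Transform the total weighted input into the output via the activation function.
    return step(total)

# Example: with these weights and threshold the node acts as a logical AND gate.
print(perceptron_output([1, 1], [0.6, 0.6], 1.0))  # -> 1
print(perceptron_output([1, 0], [0.6, 0.6], 1.0))  # -> 0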
How do ANNs work?

An artificial neuron is an imitation of a human neuron


How do ANNs work?

Inputs: x_1, x_2, ..., x_m
Weights: w_1, w_2, ..., w_m
Processing: the weighted sum v_k = \sum_{i=1}^{m} w_i x_i
Transfer (activation) function: output y = f(v_k)
The output is a function of the inputs, shaped by the weights
and the transfer function.
Artificial Neural Networks
 An ANN can:
1. compute any computable function, given an appropriate choice
of network topology and weight values
2. learn from experience, specifically by trial-and-error!
Multilayer Perceptron
• Input Layer
• Hidden Layers
• Output Layer

[Diagram: an MLP with inputs x_1 ... x_n feeding the hidden
layers, which feed the output layer]
MLP
• Weights Updates
• Back propagation
• Two phases of computation:
– Forward pass: run the NN and compute the error for each
neuron of the output layer.
– Backward pass: start at the output layer, and pass the
errors backwards through the network, layer by layer, by
recursively computing the local gradient of each neuron.
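
A compact sketch of these two phases for a one-hidden-layer
network, assuming sigmoid activations and a made-up 2-2-1
layout (all names and sizes here are illustrative):

import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

random.seed(0)
# Assumed toy shape: 2 inputs -> 2 hidden neurons -> 1 output neuron.
w_hidden = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]
w_output = [random.uniform(-1.0, 1.0) for _ in range(2)]

def forward(x):
    # Forward pass: run the network and collect every neuron's output.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    output = sigmoid(sum(w * h for w, h in zip(w_output, hidden)))
    return hidden, output

def backward(x, target, rate=0.5):
    # Backward pass: start at the output layer and pass the error backwards,
    # recursively computing each neuron's local gradient.
    hidden, output = forward(x)
    grad_out = (target - output) * output * (1.0 - output)
    for j, h in enumerate(hidden):
        grad_hid = grad_out * w_output[j] * h * (1.0 - h)
        w_output[j] += rate * grad_out * h
        for i, xi in enumerate(x):
            w_hidden[j][i] += rate * grad_hid * xi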
Learning by trial‐and‐error
A continuous process of:
Trial:
Processing an input to produce an output (in ANN terms:
compute the output for a given input).
Evaluate:
Comparing the actual output with the expected output.
Adjust:
Adjusting the weights.
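
As a sketch, one trial/evaluate/adjust cycle for a single
perceptron, reusing the perceptron_output function from the
earlier sketch (the learning rate is an assumed value):

def train_step(inputs, expected, weights, threshold, rate=0.1):
    actual = perceptron_output(inputs, weights, threshold)  # Trial
    error = expected - actual                               # Evaluate
    for i in range(len(weights)):                           # Adjust
        weights[i] += rate * error * inputs[i]
    return error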
Design Issues
 Initial weights (small random values in [-1, 1])
 Transfer function (how are the inputs and the weights
combined to produce the output?)
 Error estimation
 Weights adjusting
 Number of neurons
 Data representation
 Size of training set
Transfer Functions – Activation Functions
 Linear: The output is proportional to the total
weighted input.
 Threshold: The output is set at one of two values,
depending on whether the total weighted input is
greater than or less than some threshold value.
 Non‐linear: The output varies continuously but not
linearly as the input changes.
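
The three families above, sketched in Python (the slope, the
threshold value, and the choice of a sigmoid as the non-linear
example are assumptions for illustration):

import math

def linear(v, slope=1.0):
    # Linear: output proportional to the total weighted input.
    return slope * v

def threshold(v, theta=0.0):
    # Threshold: one of two values, depending on which side of theta v falls.
    return 1 if v > theta else 0

def nonlinear(v):
    # Non-linear (sigmoid): varies continuously, but not linearly, with the input.
    return 1.0 / (1.0 + math.exp(-v))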
Error Estimation
 The root mean square error (RMSE) is a frequently used
measure of the differences between the values predicted by a
model or an estimator and the values actually observed from
the system being modeled or estimated.
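
For n predictions p_i against observed values o_i,
RMSE = sqrt((1/n) * \sum_i (p_i - o_i)^2); a direct sketch:

import math

def rmse(predicted, observed):
    # Root mean square error between model predictions and observations.
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)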
Weights Adjusting
 After each iteration, the weights should be adjusted to
minimize the error, either by
– searching over all possible weights, or
– back propagation.
Back Propagation
 Back-propagation is an example of supervised learning; it is
used at each layer to minimize the error between the
layer's response and the target data.
 The error at each hidden layer is an average of the errors
evaluated at the layer above.
 Hidden-layer networks are trained this way.
Back Propagation
 N is a neuron.
 N_w is one of N's input weights.
 N_out is N's output.
 N_w = N_w + ΔN_w
 ΔN_w = N_out × (1 − N_out) × N_ErrorFactor
 N_ErrorFactor = N_ExpectedOutput − N_ActualOutput
 This works only for the last layer, as only there do we know
both the actual output and the expected output.
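
A direct sketch of this output-layer rule; the N_out × (1 − N_out)
factor is the derivative of a sigmoid, so a sigmoid output
neuron is assumed:

def output_layer_delta(n_out, expected):
    # N_ErrorFactor = expected output minus actual output.
    error_factor = expected - n_out
    # ΔN_w per the rule above; n_out * (1 - n_out) is the sigmoid derivative.
    return n_out * (1 - n_out) * error_factor

# Applied as: n_w = n_w + output_layer_delta(n_out, expected)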
Number of neurons
 Many neurons:
 Higher accuracy
 Slower
 Risk of over-fitting:
 memorizing, rather than understanding
 the network will be useless on new problems

 Few neurons:
 Lower accuracy
 Inability to learn at all
 The goal is to find the optimal number.
Data representation
 Input/output data usually needs pre-processing, for example:
 Pictures:
 pixel intensities (see the sketch below)
 Text:
 encoded as a numeric pattern
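
A minimal sketch of turning raw grayscale pixels into network
inputs (the 8-bit 0-255 intensity range is an assumption about
the source data):

def pixels_to_inputs(pixels):
    # Scale 8-bit grayscale intensities (0-255) into [0, 1] network inputs.
    return [p / 255.0 for p in pixels]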
Size of training set
 There is no one-size-fits-all formula.
 Overfitting can occur if a "good" training set is not chosen.
 What constitutes a "good" training set?
 Samples must represent the general population.
 Samples must contain members of each class.
 Samples in each class must contain a wide range of
variations and noise effects.
 The size of the training set is related to the number of
hidden neurons.
Application Areas
 Function approximation
 including time-series prediction and modeling
 Classification
 including pattern and sequence recognition, novelty
detection, and sequential decision making
 (radar systems, face identification, handwritten text recognition)

 Data processing
 including filtering, clustering, blind source separation, and
compression
 (data mining, e-mail spam filtering)
Advantages / Disadvantages
 Advantages
 Adapt to unknown situations
 Powerful: can model complex functions
 Easy to use: learns by example, and very little
domain-specific expertise is needed
 Disadvantages
 Forgets
 Not exact
 Large complexity of the network structure
Conclusion
 Artificial Neural Networks are an imitation of biological
neural networks, but much simpler ones.
 Computing has a lot to gain from neural networks: their
ability to learn by example makes them very flexible and
powerful; furthermore, there is no need to devise an algorithm
to perform a specific task.
Conclusion
 Neural networks also contribute to areas of research
such as neurology and psychology. They are regularly
used to model parts of living organisms and to
investigate the internal mechanisms of the brain.
 Many factors affect the performance of ANNs, such
as the transfer functions, the size of the training sample,
the network topology, the weight-adjusting algorithm, …
Neural network mathematics

Inputs → Output, for a network with four inputs, a first layer
of four neurons, a second layer of three neurons, and one
output neuron:

First layer:  y_i^1 = f(x_i, w_i^1),  i = 1, ..., 4
              y^1 = (y_1^1, y_2^1, y_3^1, y_4^1)

Second layer: y_j^2 = f(y^1, w_j^2),  j = 1, 2, 3
              y^2 = (y_1^2, y_2^2, y_3^2)

Output:       y_Out = f(y^2, w_1^3)
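
A direct sketch of this composition, taking f to be a sigmoid
of the weighted sum (an assumption; the slide leaves f abstract):

import math

def f(inputs, weights):
    # Assumed form of f: sigmoid of the weighted sum of the inputs.
    v = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-v))

def network_output(x, w1, w2, w3):
    # First layer: one neuron per input component, y_i^1 = f(x_i, w_i^1).
    y1 = [f([x[i]], w1[i]) for i in range(4)]
    # Second layer: each neuron sees the whole vector y^1, y_j^2 = f(y^1, w_j^2).
    y2 = [f(y1, w2[j]) for j in range(3)]
    # Output neuron: y_Out = f(y^2, w_1^3).
    return f(y2, w3)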
Summary

• Artificial neural networks are inspired by the learning
processes that take place in biological systems.
• Artificial neurons and neural networks try to imitate the
working mechanisms of their biological counterparts.
• Learning can be perceived as an optimisation process.
• Biological neural learning happens by the modification
of the synaptic strength. Artificial neural networks learn
in the same way.
• The synapse strength modification rules for artificial
neural networks can be derived by applying mathematical
optimisation methods.
Summary
• Learning tasks of artificial neural networks can be
reformulated as function approximation tasks.
• Neural networks can be considered as nonlinear function
approximating tools (i.e., linear combinations of nonlinear
basis functions), where the parameters of the networks
should be found by applying optimisation methods.
• The optimisation is done with respect to the approximation
error measure.
• In general it is enough to have a single hidden layer neural
network (MLP, RBF or other) to learn the approximation of
a nonlinear function. In such cases general optimisation can
be applied to find the change rules for the synaptic weights.
