NN Lecture 1: Introduction
[Images: a lion and a bird]
What is that?
Neural networks to the rescue…
Definition of ANN
“Data processing system consisting of a large number of simple, highly interconnected processing elements (artificial neurons) in an architecture inspired by the structure of the cerebral cortex of the brain”
Neural Networks
What is a Neural Network?
• Biologically motivated approach to machine learning
Similarity with Biological Networks
[Figures: a biological neuron (biological neural network) alongside an artificial neuron]
Artificial Neurons
[Figures: the four basic components of a human biological neuron, and the components of a basic artificial neuron]
Model of a Neuron
[Diagram: inputs X1, X2, X3 with weights Wa, Wb, Wc feeding a summing junction and activation function f(), producing output Y]
• Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received. Typically, a neuron sends its activation as a signal to several other neurons.
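A minimal Python sketch of this model, assuming the weighted-sum-plus-activation form shown in the diagram (the input and weight values, and the choice of a sigmoid for f, are only illustrative):

import math

def neuron_output(inputs, weights, f):
    """Compute Y = f(weighted sum of the inputs) for a single artificial neuron."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return f(weighted_sum)

# Names follow the diagram: inputs X1..X3, weights Wa..Wc, activation f(), output Y.
X = [0.2, 0.7, 1.0]                               # illustrative input values
W = [0.5, -0.3, 0.8]                              # illustrative weight values
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))    # one possible choice of f()
Y = neuron_output(X, W, sigmoid)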
Artificial Neural Network
[Diagram: a biological neuron (dendrite, nucleus, axon, synapse) drawn as an artificial neuron with inputs x1, x2, weights w1, w2, and output y]
Activation Function:
y_in = x1w1 + x2w2
f(y_in) = 1 if y_in >= θ (the threshold), and f(y_in) = 0 otherwise
- A neuron receives input, determines the strength (weight) of each input, calculates the total weighted input, and compares this total with a threshold value.
- If the total weighted input is greater than or equal to the threshold, the neuron produces an output; if it is less than the threshold, no output is produced.
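A minimal sketch of the threshold rule just described (the threshold value and the inputs below are illustrative, not taken from the slide):

def threshold_neuron(x1, x2, w1, w2, theta):
    """Fire (output 1) only if the total weighted input reaches the threshold."""
    y_in = x1 * w1 + x2 * w2          # total weighted input
    return 1 if y_in >= theta else 0  # compare with the threshold

# Illustrative values:
y = threshold_neuron(x1=1.0, x2=0.0, w1=0.7, w2=0.2, theta=0.5)   # -> 1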
One Neuron as a Network
• Here x1 and x2 are normalized attribute values of the data.
• The values x1 and x2, multiplied by the weights w1 and w2, are the inputs to the neuron.
• Given that
Bias of a Neuron
• The bias shifts the neuron's decision boundary away from the origin.
[Plot: the lines x1 - x2 = -1, x1 - x2 = 0, and x1 - x2 = 1 in the (x1, x2) plane]
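A small sketch of how the bias shifts the boundary in the plot above, assuming weights w1 = 1 and w2 = -1 so that the firing condition x1 - x2 + b >= 0 gives the boundary line x1 - x2 = -b (these weights are an assumption chosen to match the plotted lines):

def fires(x1, x2, b):
    """Step neuron with assumed weights w1 = 1, w2 = -1 and bias b."""
    net = 1.0 * x1 + (-1.0) * x2 + b   # net input including the bias
    return 1 if net >= 0 else 0

# Moving the bias from -1 to 0 to 1 shifts the boundary x1 - x2 = -b,
# so the same point can fall on either side of it:
for b in (-1.0, 0.0, 1.0):
    print(b, fires(1.0, 0.5, b))   # -> 0, 1, 1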
Neuron with Activation
• The neuron is the basic information processing unit of a NN. It consists of:
– a set of connecting links (synapses), each characterized by a weight
– an adder for summing the weighted input signals
– an activation function for computing the output signal from that sum
Brief History
• 1949 Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was proposed.
• 1958 Rosenblatt introduced the simple single-layer networks now called Perceptrons.
• 1969 Minsky and Papert's book Perceptrons demonstrated the limitations of single-layer perceptrons, and almost the whole field went into hibernation.
• 1982 Kohonen developed the Self-Organizing Maps that now bear his name.
• 1986 The Back-Propagation learning algorithm for Multi-Layer Perceptrons was rediscovered and the whole field took off again.
• 2000s The power of Ensembles of Neural Networks and Support Vector Machines becomes apparent.
• ……………………………………..
Characterization
• Architecture
– a pattern of connections between neurons
• Single Layer Feedforward
• Multilayer Feedforward
• Recurrent
• Strategy / Learning Algorithm
– a method of determining the connection weights
• Supervised
• Unsupervised
• Reinforcement
• Activation Function
– a function to compute the output signal from the input signal
Single Layer Feedforward NN
[Diagram: input layer units x1, x2 connected directly to output layer units ym, yn through weights w11, w12, w21, w22]

Multilayer Feedforward NN
[Diagram: input layer x1, ..., xm, hidden layer z1, ..., zn, and output layer y1, y2, ..., with weights such as Vmn between layers]
Examples: CCN, GRNN, MADALINE, MLFF with BP, Neocognitron, RBF, RCE
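A minimal NumPy sketch of the forward pass for both architectures; the layer sizes, random weights, and sigmoid activation are assumptions for illustration only:

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # input vector x1..x3 (illustrative size)

# Single layer feedforward: input layer -> output layer.
W = rng.normal(size=(2, 3))       # weights w_ij from 3 inputs to 2 outputs
y_single = sigmoid(W @ x)

# Multilayer feedforward: input layer -> hidden layer z -> output layer y.
V1 = rng.normal(size=(4, 3))      # input-to-hidden weights
V2 = rng.normal(size=(2, 4))      # hidden-to-output weights (cf. Vmn)
z = sigmoid(V1 @ x)
y_multi = sigmoid(V2 @ z)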
Recurrent NN
[Diagram: input nodes, hidden nodes, and outputs connected with feedback loops]
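A minimal sketch of one time step of a recurrent network, assuming a simple form (not specified on the slide) in which the hidden nodes receive the current input together with their own previous activations:

import numpy as np

def recurrent_step(x_t, h_prev, W_in, W_rec, W_out):
    """One time step: the hidden state feeds back into itself."""
    h_t = np.tanh(W_in @ x_t + W_rec @ h_prev)   # feedback from the previous hidden state
    y_t = W_out @ h_t
    return h_t, y_t

# One step with illustrative shapes (3 inputs, 4 hidden nodes, 2 outputs):
x_t = np.zeros(3)
h_prev = np.zeros(4)
W_in, W_rec, W_out = np.zeros((4, 3)), np.zeros((4, 4)), np.zeros((2, 4))
h, y = recurrent_step(x_t, h_prev, W_in, W_rec, W_out)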
Unsupervised Learning
Reinforcement Learning
Activation Functions
• Identity
f(x) = x
• Binary step
f(x) = 1 if x >= θ
f(x) = 0 otherwise
• Binary sigmoid
f(x) = 1 / (1 + e^(-x))
• Bipolar sigmoid
f(x) = -1 + 2 / (1 + e^(-x))
• Hyperbolic tangent
f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
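The same activation functions written as a small Python sketch (theta in the binary step is the threshold; its default of 0 here is an assumption):

import math

def identity(x):
    return x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0

def binary_sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    return -1.0 + 2.0 / (1.0 + math.exp(-x))

def hyperbolic_tangent(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))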
[Diagram: a neuron with inputs x1, x2, weights w1 = 0.5, w2 = 0.3, and output y]
Activation Function: binary step with threshold θ = 0.5
y_in = x1w1 + x2w2
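Putting this example into code, using the weights and threshold from the figure as reconstructed above; the input values are illustrative, since the slide does not give specific x1 and x2:

def step_neuron(x1, x2, w1=0.5, w2=0.3, theta=0.5):
    y_in = x1 * w1 + x2 * w2          # total weighted input
    return 1 if y_in >= theta else 0  # binary step with threshold theta

print(step_neuron(1, 1))   # y_in = 0.8 >= 0.5 -> output 1
print(step_neuron(1, 0))   # y_in = 0.5 >= 0.5 -> output 1
print(step_neuron(0, 1))   # y_in = 0.3 <  0.5 -> output 0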
Where can neural network systems help…
• when we can't formulate an algorithmic solution.
• when we can get lots of examples of the behavior we require ('learning from experience').
• when we need to pick out the structure from existing data.
Who is interested?...
• Electrical Engineers – signal processing, control theory
• Computer Engineers – robotics
• Computer Scientists – artificial intelligence, pattern recognition
• Mathematicians – modelling tool when explicit relationships are unknown
Problem Domains
• Storing and recalling patterns
• Classifying patterns
• Mapping inputs onto outputs
• Grouping similar patterns
• Finding solutions to constrained optimization problems
Applications of ANNs
• Signal processing
• Pattern recognition, e.g. handwritten characters or face identification
• Diagnosis, or mapping symptoms to a medical case
• Speech recognition
• Human Emotion Detection
• Educational Loan Forecasting
Advantages Of NN
NON-LINEARITY
It can model non-linear systems.
INPUT-OUTPUT MAPPING
It can derive a relationship between a set of input and output responses.
ADAPTIVITY
The ability to learn allows the network to adapt to changes in the surrounding environment.
EVIDENTIAL RESPONSE
It can provide a confidence level for a given solution.
Advantages Of NN
CONTEXTUAL INFORMATION
Knowledge is represented by the structure of the network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally in the network.
FAULT TOLERANCE
The distributed nature of the NN gives it fault-tolerant capabilities.
NEUROBIOLOGY ANALOGY
It models the architecture of the brain.
Comparison of ANN with conventional AI methods