Different Classes of Abstract Models:
- Supervised Learning (EX: Perceptron)
- Reinforcement Learning
- Unsupervised Learning (EX: Hebb Rule)
• Linear: $O = \sum_i w_i x_i + w_0$
• Sigmoid: $O = \mathrm{sig}\left(\sum_i w_i x_i + w_0\right)$
THE PERCEPTRON (Classification)

Threshold unit: $O = \Theta\left(\sum_i w_i x_i + w_0\right)$, where
$$\Theta(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0 \end{cases}$$
Here $O$ is the output for input pattern $x$, the $w_i$ are the synaptic weights, and $y$ is the desired output.
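A minimal sketch of these three unit types in Python (one assumption: `sig` is taken here to be the logistic function, which the slide does not specify):

```python
import numpy as np

def linear(x, w, w0):
    """Linear unit: O = sum_i w_i x_i + w0."""
    return np.dot(w, x) + w0

def sigmoid(x, w, w0):
    """Sigmoid unit: O = sig(sum_i w_i x_i + w0); logistic chosen here (an assumption)."""
    return 1.0 / (1.0 + np.exp(-linear(x, w, w0)))

def threshold(x, w, w0):
    """Threshold unit: O = 1 if sum_i w_i x_i + w0 >= 0, else 0."""
    return 1.0 if linear(x, w, w0) >= 0 else 0.0
```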
AND

x1 x2 | y
 1  1 | 1
 1  0 | 0
 0  1 | 0
 0  0 | 0

[Diagram: a perceptron with inputs $x_1, \dots, x_5$, weights $w_1, \dots, w_5$, and output $o$]
AND

x1 x2 | y
 1  1 | 1
 1  0 | 0
 0  1 | 0
 0  0 | 0

With $w_1 = w_2 = 1$ and $w_0 = -1.5$, the decision boundary $x_1 + x_2 - 1.5 = 0$ puts only the pattern $(1,1)$ on the positive side: AND is linearly separable.

[Figure: the four input points in the $(x_1, x_2)$ plane, labeled 0/1, with the line $x_1 + x_2 - 1.5 = 0$]
OR

x1 x2 | y
 1  1 | 1
 1  0 | 1
 0  1 | 1
 0  0 | 0

With $w_1 = w_2 = 1$ and $w_0 = -0.5$, the decision boundary $x_1 + x_2 - 0.5 = 0$ puts only $(0,0)$ on the negative side: OR is linearly separable.

[Figure: the four input points in the $(x_1, x_2)$ plane, labeled 0/1, with the line $x_1 + x_2 - 0.5 = 0$]
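As a quick check, a threshold unit with the weights read off these two boundaries reproduces both truth tables (a minimal sketch):

```python
import numpy as np

def threshold(x, w, w0):
    return 1 if np.dot(w, x) + w0 >= 0 else 0

for x in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    o_and = threshold(x, (1, 1), -1.5)  # boundary x1 + x2 - 1.5 = 0
    o_or = threshold(x, (1, 1), -0.5)   # boundary x1 + x2 - 0.5 = 0
    print(x, "AND:", o_and, "OR:", o_or)
```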
Perceptron learning rule:

[Diagram: a perceptron with inputs $x_1, \dots, x_5$, weights $w_1, \dots, w_5$, and output $o$]

1. Show examples of Perceptron learning with demo program (a minimal sketch follows).
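The slide does not write the rule out; the sketch below assumes the standard perceptron update $w_i \leftarrow w_i + \eta\,(y - o)\,x_i$, trained here on the AND task:

```python
import numpy as np

# AND task, with a constant bias component appended to each input.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
y = np.array([1, 0, 0, 0], dtype=float)
X = np.hstack([X, np.ones((4, 1))])  # bias input x0 = 1

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)    # weights w1, w2, w0
eta = 0.1                            # learning rate

for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        o = 1.0 if xi @ w >= 0 else 0.0   # threshold unit output
        w += eta * (yi - o) * xi          # perceptron update
        errors += int(o != yi)
    if errors == 0:                       # converged: all patterns correct
        break

print("weights:", w)  # weighted sum exceeds 0 only for input (1,1)
```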
$$\frac{dW_i}{dt} = \eta\, x_i\, y$$
where the $x_i$ are the inputs and the output $y$ is assumed linear:
$$y = \sum_j W_j x_j$$
Results in 2D

Example of Hebb in 2D
[Figure: input points in the $(x_1, x_2)$ plane (axes $-2$ to $2$) forming a cloud oriented at $\theta = \pi/3$, with vectors $m$ and $w$ marked]
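A minimal sketch of this 2D Hebb example, assuming zero-mean Gaussian inputs whose principal axis lies at $\theta = \pi/3$; the direction of the weight vector converges to the principal eigenvector of the input covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Zero-mean input cloud, elongated along the direction theta.
X = (R @ np.diag([1.5, 0.3]) @ rng.normal(size=(2, 5000))).T

w = rng.normal(size=2) * 0.01
eta = 0.001
for x in X:
    y = w @ x               # linear output y = sum_j W_j x_j
    w += eta * x * y        # Hebb rule: dW_i/dt = eta * x_i * y

w_dir = w / np.linalg.norm(w)  # plain Hebb grows without bound; compare direction only
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
print("learned direction:   ", w_dir)
print("principal eigenvector:", eigvecs[:, np.argmax(eigvals)])
```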
On the board:
• Eigenvalues, eigenvectors
In the simplest case, the change in synaptic weight $w$ is:
$$\Delta w_i = \eta\, x_i\, y$$
where the $x$ are input vectors and $y$ is the neural response. Since $y = \sum_j w_j x_j$, averaging over the inputs gives
$$\left\langle \frac{\Delta w}{\Delta t} \right\rangle \;\to\; \frac{dw}{dt} = Q\,w, \qquad Q = \langle x\, x^T \rangle$$
If $\langle x \rangle = 0$, $Q$ is the covariance matrix.
Under the plain Hebb rule the weights grow without bound; adding a decay term bounds them:
$$\frac{dW_i}{dt} = \eta\left( x_i\, y - W_i\, y^2 \right)$$
(this is Oja's rule, whose stable fixed point is the first principal component of the inputs).

Show PCA program:
[Figure: PCA program output; trajectory of the weight components $(W_1, W_2)$, both axes from $-1$ to $1$]
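A minimal sketch of such a PCA program, reusing the elongated 2D input cloud from the Hebb example above; under Oja's rule the weight vector settles at unit norm along the principal direction instead of diverging:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = (R @ np.diag([1.5, 0.3]) @ rng.normal(size=(2, 20000))).T

w = rng.normal(size=2) * 0.1
eta = 0.005
for x in X:
    y = w @ x
    w += eta * (x * y - w * y**2)   # Oja: dW_i/dt = eta (x_i y - W_i y^2)

print("w:", w, "norm:", np.linalg.norm(w))  # ~unit norm, along theta = pi/3
```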
Requires:
• Bidirectional synaptic modification (LTP/LTD)
• Sliding modification threshold
• The fixed points depend on the environment, and in a patterned environment only selective fixed points are stable.

[Figure: the modification function, showing LTD below the threshold and LTP above it]
The integral form of the average:
$$\theta_m(t) = \int_{-\infty}^{t} y^2(t')\, p\, e^{-p(t - t')}\, dt'$$
is equivalent to this differential form:
$$\frac{d\theta_m}{dt} = p\left( y^2 - \theta_m \right), \qquad \text{with } p > 0.$$
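A minimal numeric sketch of this equivalence (assuming the exponential-average form reconstructed above): integrating the differential form makes $\theta_m$ track the running average of $y^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.1       # averaging rate (p > 0); 1/p is the memory time constant
dt = 1.0
theta_m = 0.0

for t in range(200):
    y = rng.choice([0.5, 1.5])            # toy response sequence
    theta_m += dt * p * (y**2 - theta_m)  # d(theta_m)/dt = p (y^2 - theta_m)

print("theta_m:", theta_m, "vs <y^2> =", 0.5 * (0.5**2 + 1.5**2))
```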
An N=2 example: what are the stable fixed points of $m$ in this case, for inputs $x^1$ and $x^2$?
(Notation: $m$ denotes the weight vector, so $y = m \cdot x$.)

Note: every time a new input is presented, $m$ changes, and so does $\theta_m$.

[Figure: two input vectors $x^1$, $x^2$ in the plane]
Alternative form:
• One dimension: $y = w \cdot x$
• Quadratic form: $\dfrac{dw}{dt} = y\,(y - \theta_M)\,x$
• Instantaneous limit: $\theta_M = y^2$, so
$$\frac{dw}{dt} = y\,(y - y^2)\,x = y^2 (1 - y)\,x$$

[Figure: the right-hand side plotted against $y$, with zeros at $y = 0$ and $y = 1$]
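A minimal sketch of the instantaneous-limit dynamics in one dimension, assuming a constant input $x = 1$ so that $y = w$; every positive start converges to the selective fixed point $y = 1$:

```python
# One-dimensional BCM in the instantaneous limit: dw/dt = y^2 (1 - y) x,
# with x = 1 (an assumption) so y = w. Fixed points at y = 0 and y = 1.
dt = 0.01
for y0 in (0.05, 0.5, 1.5):
    y = y0
    for _ in range(10000):
        y += dt * y**2 * (1 - y)
    print(f"y0 = {y0} -> y = {y:.4f}")   # all positive starts flow to y = 1
```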
BCM Theory
Selectivity
• Two dimensions: $y = w_1 x_1 + w_2 x_2 = w \cdot x$
• Two patterns: $y^1 = w \cdot x^1$, $y^2 = w \cdot x^2$
• Quadratic form: $\dfrac{dw}{dt} = y^k\,(y^k - \theta_M)\,x^k$
• Averaged threshold: $\theta_M = E\left[y^2\right] = \sum_{k=1}^{2} p^k \left(y^k\right)^2$
• Fixed points: $\dfrac{dw}{dt} = 0$

[Figure: the two input patterns $x^1$, $x^2$ in the $(x_1, x_2)$ plane]
BCM Theory: Selectivity
• Learning equation: $\dfrac{dw}{dt} = y^k\,(y^k - \theta_M)\,x^k$
• Four possible fixed points:
  (unselective) $y^1 = 0,\; y^2 = 0$
  (selective) $y^1 = \theta_M,\; y^2 = 0$
  (selective) $y^1 = 0,\; y^2 = \theta_M$
  (unselective) $y^1 = \theta_M,\; y^2 = \theta_M$
• Threshold at the selective fixed point: $\theta_M = p^1 (y^1)^2 + p^2 (y^2)^2 = p^1 (y^1)^2$, hence $y^1 = 1/p^1$

[Figure: weight space $(w_1, w_2)$ with the input patterns $x^1$, $x^2$]
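A minimal sketch of these dynamics, assuming two orthogonal patterns $x^1 = (1,0)$, $x^2 = (0,1)$ presented with equal probability $p^1 = p^2 = 0.5$ and the averaged threshold $\theta_M = \sum_k p^k (y^k)^2$; the weights converge to a selective fixed point where one response is $1/p^k = 2$ and the other is $0$:

```python
import numpy as np

rng = np.random.default_rng(4)
patterns = np.array([[1.0, 0.0], [0.0, 1.0]])  # x^1, x^2
p = np.array([0.5, 0.5])                        # presentation probabilities
w = rng.uniform(0.2, 0.8, size=2)
eta = 0.05

for _ in range(20000):
    k = rng.choice(2, p=p)
    x = patterns[k]
    y = w @ x
    ys = patterns @ w               # responses y^1, y^2 to both patterns
    theta_M = np.sum(p * ys**2)     # averaged threshold E[y^2]
    w += eta * y * (y - theta_M) * x  # BCM update for the presented pattern

print("responses:", patterns @ w)  # one ~2 (= 1/p^k), the other ~0: selective
```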
Summary
• When there are two linearly independent inputs, what will the stable BCM fixed points be? What will $\theta_M$ be?
• When there are K independent inputs, what are the stable fixed points? What will $\theta_M$ be?
Input → desired output: P input–output pairs.

[Diagram: feedforward network mapping an N-dimensional input to outputs $o_1, \dots, o_M$]

2. Linear output: $O_i^r = \sum_{j=1}^{N} W_{ij}\, x_j^r$
- Orthogonal inputs (give examples; a minimal sketch follows)
- Might require other rules: covariance rule, Perceptron
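A minimal sketch of such a linear associator storing P pairs, assuming Hebbian outer-product storage $W_{ij} = \sum_r y_i^r x_j^r$ (one common choice; the slide does not fix the learning rule). With orthonormal inputs each pair is recalled exactly:

```python
import numpy as np

N, M, P = 4, 3, 3
rng = np.random.default_rng(5)

# P orthonormal input patterns x^r (rows), built with a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
X = Q[:, :P].T                       # shape (P, N)
Y = rng.normal(size=(P, M))          # desired outputs y^r

# Hebbian outer-product storage: W_ij = sum_r y_i^r x_j^r
W = sum(np.outer(Y[r], X[r]) for r in range(P))

# Linear output O_i^r = sum_j W_ij x_j^r; orthonormality gives exact recall.
for r in range(P):
    O = W @ X[r]
    print("pair", r, "max recall error:", np.max(np.abs(O - Y[r])))
```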
Formal neural networks can accomplish many tasks, for example: