NN Adaline
L. Manevitz
Plan of Lecture
Perceptron: connected and convex examples
Adaline: square error gradient; calculate for AND, XOR
Discuss limitations
LMS algorithm: derivation
LMS Derivation
Errsq = Σ_k (d(k) - W x(k))**2
Grad(Errsq) = 2 Σ_k (d(k) - W x(k)) (-x(k))
W(new) = W(old) - μ Grad(Errsq)
To ease calculations, use the single-example Errsq(k) in place of the full sum Errsq.
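A minimal NumPy sketch of this batch update (the function name, the AND data, and the learning rate value are illustrative choices, not from the lecture):

```python
import numpy as np

def adaline_batch_step(W, X, d, mu):
    """One batch gradient-descent step for Adaline.

    W  : weight vector, shape (n,)
    X  : inputs, shape (K, n) -- one example x(k) per row
    d  : desired outputs, shape (K,)
    mu : learning rate in W(new) = W(old) - mu * Grad(Errsq)
    """
    err = d - X @ W                 # Err(k) = d(k) - W·x(k) for all k
    grad = -2.0 * X.T @ err         # Grad(Errsq) = 2 Σ_k Err(k) (-x(k))
    return W - mu * grad            # W(new) = W(old) - mu * Grad(Errsq)

# Example: learn AND on bipolar inputs (first column is a bias input)
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
d = np.array([-1, -1, -1, 1], float)
W = np.zeros(3)
for _ in range(200):
    W = adaline_batch_step(W, X, d, mu=0.05)
print(W, np.sign(X @ W))            # learned weights and thresholded outputs
```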
Applications
Adaline has better convergence properties than the Perceptron.
1. Apply an input x(k) to the Adaline.
2. Find the square error of the current input:
   Errsq(k) = (d(k) - W x(k))**2
3. Approximate Grad(Errsq) by differentiating Errsq(k), i.e. approximating the average Errsq by the single-example Errsq(k), obtaining -2 Err(k) x(k), where Err(k) = d(k) - W x(k).
4. Update the weights: W(new) = W(old) + 2 μ Err(k) x(k).
5. Repeat from step 1 with the next input until the error is acceptably small (the full loop is sketched below).
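A minimal sketch of this online loop in NumPy (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def lms_train(X, d, mu=0.05, epochs=50):
    """Online LMS: update the weights after every single example."""
    W = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_k, d_k in zip(X, d):        # step 1: apply input x(k)
            err_k = d_k - W @ x_k          # steps 2-3: Err(k) = d(k) - W·x(k)
            W = W + 2 * mu * err_k * x_k   # step 4: W(new) = W(old) + 2μ Err(k) x(k)
    return W                               # step 5: repeat over the data

# AND on bipolar inputs with a bias column
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
d = np.array([-1, -1, -1, 1], float)
W = lms_train(X, d)
print(np.sign(X @ W))   # -> [-1. -1. -1.  1.]
```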
Convergence Phenomenon
LMS converges depending on the choice of the learning rate μ: too large a μ makes the updates overshoot and diverge, too small a μ makes convergence slow.
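A small sketch of this effect on the AND data above (the two μ values are illustrative):

```python
import numpy as np

# Compare a small vs. a large learning rate mu with the batch Adaline update
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
d = np.array([-1, -1, -1, 1], float)

for mu in (0.05, 0.3):
    W = np.zeros(3)
    for _ in range(30):
        err = d - X @ W
        W = W + 2 * mu * X.T @ err      # W(new) = W(old) - mu * Grad(Errsq)
    print(f"mu={mu}: final Errsq = {np.sum((d - X @ W) ** 2):.3g}")
# Small mu -> Errsq settles at its minimum; large mu -> updates overshoot and Errsq blows up.
```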
Limitations
Only linearly separable problems can be solved. How can we get around this?
Use a network of neurons?
Use a transformation of the data so that it becomes linearly separable (see the sketch below).
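As one illustration of the second option, a hand-picked transformation (adding the product x1·x2 as an extra feature; this particular feature is my choice, not from the slides) makes XOR linearly separable for the same LMS rule:

```python
import numpy as np

# XOR on bipolar inputs: not linearly separable in (x1, x2) ...
x = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
d = np.array([-1, 1, 1, -1], float)   # XOR targets

# ... but it becomes separable after adding the product feature x1*x2
X = np.column_stack([np.ones(4), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])

W = np.zeros(4)
for _ in range(200):                   # same LMS/gradient rule as before
    err = d - X @ W
    W = W - 0.05 * (-2 * X.T @ err)
print(np.sign(X @ W))                  # -> [-1.  1.  1. -1.], i.e. XOR
```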
Learnability
Change McCulloch-Pitts neurons to sigmoid units, etc.
Derive backprop using the chain rule (like the LMS theorem).
Sample feed-forward network (no loops)
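A minimal sketch (my own illustration, not from the slides) of the chain-rule step for a single sigmoid unit, which is the building block that backpropagation repeats layer by layer:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def sigmoid_unit_step(W, x, d, mu):
    """Gradient step for one sigmoid unit, derived by the chain rule.

    Errsq = (d - y)^2 with y = sigmoid(W·x), so
    dErrsq/dW = 2 (d - y) * (-dy/dW) = -2 (d - y) * y * (1 - y) * x
    """
    y = sigmoid(W @ x)
    grad = -2.0 * (d - y) * y * (1.0 - y) * x
    return W - mu * grad

# one illustrative update
W = np.zeros(3)
x = np.array([1.0, 0.5, -1.0])   # bias plus two inputs
W = sigmoid_unit_step(W, x, d=1.0, mu=0.1)
print(W)
```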
Threshold vs. Sigmoid activation functions (figure)
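For reference, the two activation functions from the figure (a sketch; the slide's plots are not reproduced here):

```python
import numpy as np

def threshold(s):
    """Hard threshold unit: +1 if s >= 0, else -1."""
    return np.where(s >= 0, 1.0, -1.0)

def sigmoid(s):
    """Smooth, differentiable replacement: squashes s into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

s = np.linspace(-5, 5, 5)
print(threshold(s))   # [-1. -1.  1.  1.  1.]
print(sigmoid(s))     # smooth values between 0 and 1
```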
Prediction
Diagram: the input series passes through a delay line into the NN; the NN output is compared with the actual next value.
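A minimal sketch of this setup (the series, window length, and learning rate are illustrative): delayed samples of the series are the network inputs, and the prediction is compared with the actual next value.

```python
import numpy as np

# Illustrative time series and delay (window) length
series = np.sin(0.3 * np.arange(200))
delay = 4

# Build (delayed inputs, next value) pairs: x(t-delay..t-1) -> x(t)
X = np.array([series[t - delay:t] for t in range(delay, len(series))])
d = series[delay:]

# Train a linear Adaline predictor with the same LMS rule
X = np.column_stack([np.ones(len(X)), X])      # add a bias input
W = np.zeros(delay + 1)
mu = 0.01
for _ in range(50):
    for x_k, d_k in zip(X, d):
        W += 2 * mu * (d_k - W @ x_k) * x_k    # LMS update

pred = X @ W
print("mean squared prediction error:", np.mean((d - pred) ** 2))  # compare
```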
Diagram: feed-forward network. Inputs x_j connect to hidden units through weights W_ji; hidden unit i computes F(Σ_j W_ji x_j); weights V_ik connect the hidden units to the outputs.
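A minimal forward-pass sketch of this network (the layer sizes and the use of the sigmoid for F are illustrative assumptions):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, W, V, F=sigmoid):
    """Two-layer feed-forward pass.

    x : inputs x_j,                       shape (n_in,)
    W : hidden weights, W[i, j] = W_ji,   shape (n_hidden, n_in)
    V : output weights, V[k, i] = V_ik,   shape (n_out, n_hidden)
    Hidden unit i computes h_i = F(Σ_j W_ji x_j); output k is F(Σ_i V_ik h_i).
    """
    h = F(W @ x)       # hidden layer activations
    y = F(V @ h)       # output layer activations
    return h, y

# tiny example with random weights
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # 2 inputs -> 3 hidden units
V = rng.normal(size=(1, 3))   # 3 hidden units -> 1 output
h, y = forward(np.array([0.5, -1.0]), W, V)
print(h, y)
```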