
Back Propagation Neural Network 1

Lili Ayu Wulandhari Ph.D.


Learning Method

In the learning process, the data is divided into two parts:

1. Data for training (in-sample)
2. Data for validation and testing (out-sample)

To obtain a minimum error, the training process is carried out iteratively, which can be done in one of the following ways (see the sketch after this list):
• Batch learning
  The whole in-sample dataset is used in each iteration.
• Mini-batch learning
  The in-sample data is divided into smaller groups, one of which is used in each iteration.
• Online (stochastic) learning
  The in-sample data is used one example at a time in each iteration, with the example chosen randomly.
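As an illustration, here is a minimal Python/NumPy sketch (an assumption; the slides themselves give no code) of how the in-sample data could be fed to each scheme. The array X_train and the function name iteration_data are hypothetical.

import numpy as np

# Hypothetical in-sample data: 100 examples with 4 features each.
X_train = np.random.rand(100, 4)

def iteration_data(X, mode, batch_size=10):
    """Yield the portion of in-sample data used in one iteration."""
    if mode == "batch":
        # Batch learning: the whole in-sample set in every iteration.
        yield X
    elif mode == "mini-batch":
        # Mini-batch learning: the data is split into smaller groups.
        order = np.random.permutation(len(X))
        for start in range(0, len(X), batch_size):
            yield X[order[start:start + batch_size]]
    elif mode == "online":
        # Online (stochastic) learning: one randomly chosen example at a time.
        for i in np.random.permutation(len(X)):
            yield X[i:i + 1]

for batch in iteration_data(X_train, "mini-batch"):
    pass  # one weight update per yielded batch would be performed here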
Backpropagation Neural Network Algorithm

[Figure: a fully connected feedforward network with input layer x1 … xn, hidden layer y1 … yl, and output layer z1 … zm; weights wij connect the input layer to the hidden layer, weights vjk connect the hidden layer to the output layer, and b11, b21 denote the bias terms of the hidden and output layers.]

Where:
xi : the ith input neuron
yj : the jth hidden neuron
zk : the kth output neuron
wij : weight of the ith input neuron to the jth hidden neuron
vjk : weight of the jth hidden neuron to the kth output neuron
b : bias
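To make the notation concrete, here is a minimal NumPy sketch of one forward pass and one backpropagation update for the network above. The sigmoid activation, squared-error loss, layer sizes, and learning rate eta are assumptions chosen for illustration; the slides do not fix these choices.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n, l, m = 3, 4, 2                      # sizes of input, hidden, output layers (hypothetical)
rng = np.random.default_rng(0)

W = rng.normal(size=(n, l))            # wij: ith input neuron  -> jth hidden neuron
V = rng.normal(size=(l, m))            # vjk: jth hidden neuron -> kth output neuron
b1, b2 = np.zeros(l), np.zeros(m)      # biases of the hidden and output layers

x = rng.random(n)                      # one input pattern (x1 ... xn)
t = rng.random(m)                      # its target output
eta = 0.1                              # learning rate (assumed)

# Forward pass
y = sigmoid(x @ W + b1)                # hidden neurons y1 ... yl
z = sigmoid(y @ V + b2)                # output neurons z1 ... zm

# Backward pass for the squared error E = 0.5 * sum((z - t)**2)
delta_out = (z - t) * z * (1 - z)          # error signal at the output layer
delta_hid = (V @ delta_out) * y * (1 - y)  # error signal propagated back to the hidden layer

# Gradient-descent updates of weights and biases
V -= eta * np.outer(y, delta_out)
b2 -= eta * delta_out
W -= eta * np.outer(x, delta_hid)
b1 -= eta * delta_hid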
