Homework #2 Solutions: Prayag Neural Networks and Learning Systems-I March 4, 2019
Problem 4.3.
Solution. Let $\alpha = -\beta^{2}$ for some $0 < \beta \le 1$. Substituting this into the update equation
$$\Delta w_{ji}(n) = -\eta \sum_{t=0}^{n} \alpha^{\,n-t}\, \frac{\partial E}{\partial w_{ji}(t)},$$
we get
$$\Delta w_{ji}(n) = -\eta \sum_{t=0}^{n} \left(-\beta^{2}\right)^{n-t} \frac{\partial E}{\partial w_{ji}(t)}. \qquad (1)$$
The term $(-\beta^{2})^{n-t}$ alternates in sign over consecutive iterations, which gives two cases. If the sign of the gradient of $E$ remains the same ($+$ or $-$), consecutive terms in equation (1) partially cancel, producing small updates and hence slower convergence. If the sign of the gradient alternates, the terms reinforce one another, so equation (1) produces large positive or negative updates and hence faster convergence.
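The two cases can be checked numerically. The sketch below (with assumed values $\beta = 0.9$, $n = 20$, and unit-magnitude gradients, none of which are specified in the problem) evaluates the weighted sum in equation (1) for a constant-sign gradient and for an alternating-sign gradient:

```python
import numpy as np

# Illustrative check of the two cases for equation (1); beta, n, and the
# unit-magnitude gradient sequences are assumed values, not from the problem.
beta = 0.9
n = 20
# Coefficients (-beta^2)^(n-t) for t = 0, ..., n
coeffs = np.array([(-beta**2) ** (n - t) for t in range(n + 1)])

g_same = np.ones(n + 1)                                # gradient sign constant
g_alt = np.array([(-1.0) ** t for t in range(n + 1)])  # gradient sign alternates

S_same = float(coeffs @ g_same)   # terms partially cancel -> small update
S_alt = float(coeffs @ g_alt)     # terms reinforce       -> large update
print(abs(S_same), abs(S_alt))
```

With these values the constant-sign sum has magnitude well below 1, while the alternating-sign sum accumulates to roughly $1/(1-\beta^{2})$, matching the slower/faster convergence argument above.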
Problem 4.4.
Depending on the sign of the term $(w - w_{0})$, one can then comment on the convergence of the update equation.
Problem 4.19.
[Figure 1: panels (a), (b), (c)]
Solution. Figure 1a shows the sequence generated by solving the Lorenz dynamical system. The network parameters are as follows: number of layers = 1, number of hidden neurons = 200, number of epochs = 1500, and activation function = ReLU. Figure 1b shows the predicted sequence along with the original sequence. Figure 1c shows the training loss as a function of epochs.
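A minimal sketch of this experiment follows, assuming forward-Euler integration of the Lorenz system and one-step-ahead prediction of the x-coordinate with a single-hidden-layer ReLU network trained by full-batch gradient descent. The integration scheme, initialization, learning rate, and reduced epoch count are illustrative choices, not necessarily those used to produce Figure 1:

```python
import numpy as np

# Sketch of Problem 4.19: the Euler step size, initialization, learning rate,
# and epoch count below are illustrative assumptions.
def lorenz_x(n=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system; returns the x-coordinate."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        xs[i] = x
    return xs

xs = lorenz_x()
xs = (xs - xs.mean()) / xs.std()              # normalize for stable training
X, Y = xs[:-1, None], xs[1:, None]            # predict x(t+1) from x(t)

rng = np.random.default_rng(0)
H = 200                                       # hidden neurons, as in the text
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.05, (H, 1)); b2 = np.zeros(1)
lr, losses = 5e-3, []
for epoch in range(500):                      # fewer epochs than the 1500 above
    A = np.maximum(X @ W1 + b1, 0.0)          # hidden ReLU activations
    P = A @ W2 + b2                           # network output
    err = P - Y
    losses.append(float(np.mean(err ** 2)))   # training loss (cf. Figure 1c)
    gP = 2.0 * err / len(X)                   # backpropagate the squared error
    gW2, gb2 = A.T @ gP, gP.sum(0)
    gA = (gP @ W2.T) * (A > 0)                # ReLU gradient mask
    gW1, gb1 = X.T @ gA, gA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
print(losses[0], losses[-1])
```

Because the Euler step is small, consecutive samples are strongly correlated, so the one-step-ahead map is close to the identity and the loss drops quickly, giving a curve qualitatively like Figure 1c.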
Problem 3.
Solution. (a) The parameters used in the experiment are as follows: D = 1, activation function = sigmoid, number of hidden neurons = 10, and maximum number of iterations = 50000. The decision boundary is shown in Figure 2a.
(c) The value of D is varied from 0.1 to 0.9 in steps of 0.1. The number of hidden neurons required in each case is noted, and Figure 3a shows the variation of the number of hidden neurons with respect to D. The decision boundaries corresponding to the different values of D are shown in Figures 3b-3j.
An important observation is that as the value of D decreases, the decision boundary must bend more sharply (higher curvature), indicating an increasingly complex decision boundary. This in turn requires a larger number of neurons in the hidden layer.

[Figure 2: panels (a), (b), (c)]
[Figure 3: panels (a)-(j)]