
Homework #2 solutions

Prayag
Neural networks and learning systems-I
March 4, 2019

Problem 4.3.

Solution. Let α = −β² for some 0 < β ≤ 1. Substituting this into the update equation

    \Delta w_{ji}(n) = -\eta \sum_{t=0}^{n} \alpha^{n-t} \frac{\partial E}{\partial w_{ji}(t)},

we get

    \Delta w_{ji}(n) = -\eta \sum_{t=0}^{n} (-\beta^2)^{n-t} \frac{\partial E}{\partial w_{ji}(t)}.    (1)

The term (−β²)^{n−t} alternates in sign over consecutive iterations. There are now two cases. If the sign of the gradient of E remains the same (+ or −), the terms of equation (1) partially cancel, so equation (1) results in slower convergence. If the sign of the gradient alternates, the terms reinforce one another, so equation (1) produces large positive or negative updates, resulting in faster convergence.
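To make the two cases concrete, here is a small numerical check (not part of the original solution); the values of η, β, and the gradient sequences are arbitrary illustrative choices.

```python
# Numerical check of equation (1): Delta w(n) = -eta * sum_t (-beta^2)^(n-t) * g(t).
# eta, beta, and the gradient sequences g(t) are arbitrary illustrative values.

def delta_w(grads, eta=0.1, beta=0.9):
    """Weight update of equation (1) with alpha = -beta**2."""
    n = len(grads) - 1
    alpha = -beta**2
    return -eta * sum(alpha**(n - t) * g for t, g in enumerate(grads))

n_steps = 11  # gradients at t = 0, ..., 10

# Case 1: gradient keeps the same sign -> alternating terms partially cancel.
dw_const = delta_w([1.0] * n_steps)

# Case 2: gradient alternates sign -> the terms all reinforce each other.
dw_alt = delta_w([(-1.0)**t for t in range(n_steps)])

print(f"constant-sign gradient:    |dw| = {abs(dw_const):.4f}")
print(f"alternating-sign gradient: |dw| = {abs(dw_alt):.4f}")
```

The alternating-gradient case yields an update several times larger in magnitude, matching the argument above.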

Problem 4.4.

Solution. With 0 < α ≤ 1 we know that

    \Delta w_{ji}(n) = -\eta \sum_{t=0}^{n} \alpha^{n-t} \frac{\partial E}{\partial w_{ji}(t)}.    (2)

Differentiating E(w) = k1(w − w0)² + k2 with respect to w gives ∂E/∂w = 2k1(w − w0), so

    \Delta w_{ji}(n) = -2\eta k_1 \sum_{t=0}^{n} \alpha^{n-t} \bigl(w(t) - w_0\bigr).    (3)

Depending on the sign of the term (w(t) − w0), one can comment on the convergence behavior of the update equation.
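As an illustrative check (not part of the original solution), the sketch below simulates the update of equation (3) on the quadratic cost; the values of k1, w0, η, α, and the starting point are arbitrary choices for which the iteration happens to converge.

```python
# Simulating the update (3) on E(w) = k1*(w - w0)**2 + k2.
# k1, w0, eta, alpha, and the starting point are arbitrary illustrative values.

k1, w0 = 1.0, 2.0
eta, alpha = 0.05, 0.5
w = 0.0

history = [w]
s = 0.0  # running sum s(n) = sum_t alpha**(n-t) * (w(t) - w0)
for n in range(100):
    s = alpha * s + (w - w0)       # exponentially weighted gradient history
    w = w - 2.0 * eta * k1 * s     # apply Delta w(n) from equation (3)
    history.append(w)

print(f"w after {len(history) - 1} updates: {w:.6f} (w0 = {w0})")
```

For these values the iterate settles at the minimizer w0; a larger η or α can make the same recursion oscillate or diverge, which is the sign-dependence the solution refers to.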

Problem 4.19.

[Figure 1: (a) Lorenz sequence; (b) predicted vs. original sequence; (c) training loss over epochs]

Solution. Figure 1a shows the sequence generated by solving the Lorenz dynamical system. The network parameters are as follows: number of layers = 1, number of hidden neurons = 200, number of epochs = 1500, and activation function = ReLU. Figure 1b shows the predicted sequence along with the original sequence. Figure 1c shows the training loss over epochs.
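The original experiment's code is not included; as a rough sketch under assumed settings, the snippet below integrates the Lorenz system with forward Euler and fits a small one-hidden-layer ReLU network for one-step-ahead prediction. The step size, hidden width (32 here rather than the 200 used in the homework), learning rate, and iteration count are all arbitrary demo choices.

```python
import numpy as np

# --- Generate a Lorenz trajectory with forward-Euler integration ---
# sigma, rho, beta are the classical Lorenz parameters; dt and n_steps
# are arbitrary choices for this sketch.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, n_steps = 0.01, 2000

traj = np.empty((n_steps, 3))
traj[0] = (1.0, 1.0, 1.0)
for i in range(n_steps - 1):
    x, y, z = traj[i]
    traj[i + 1] = traj[i] + dt * np.array([sigma * (y - x),
                                           x * (rho - z) - y,
                                           x * y - beta * z])

# Normalize, then build one-step-ahead (state -> next state) training pairs.
mu, sd = traj.mean(axis=0), traj.std(axis=0)
data = (traj - mu) / sd
X, Y = data[:-1], data[1:]

# --- Tiny one-hidden-layer ReLU network, full-batch gradient descent ---
rng = np.random.default_rng(0)
H, lr = 32, 0.05
W1 = rng.normal(scale=0.1, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)  # ReLU hidden layer
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - Y) ** 2)

for _ in range(300):
    h, pred = forward(X)
    err = 2.0 * (pred - Y) / len(X)    # dLoss/dPred for the MSE loss
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (h > 0)        # backprop through the ReLU
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - Y) ** 2)
print(f"MSE before training: {loss0:.4f}, after: {loss:.4f}")
```

A smaller step size or a higher-order integrator (e.g. Runge-Kutta) would track the chaotic trajectory more accurately; Euler is used here only to keep the sketch short.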

Problem 3.

Solution. (a) The parameters used in the experiment are as follows: D = 1, the activation function is the sigmoid, number of hidden neurons = 10, and the maximum number of iterations = 50000. The decision boundary is shown in Figure 2a.

(b) Experiments with different activation functions:

    • ReLU: D = 1, number of hidden neurons = 30, and the maximum number of iterations = 50000. The decision boundary is shown in Figure 2b.
    • tanh(·): D = 1, number of layers = 2, number of hidden neurons in layer 1 = 20, number of hidden neurons in layer 2 = 10, and the maximum number of iterations = 5000. The decision boundary is shown in Figure 2c.

[Figure 2: decision boundaries for (a) the sigmoid, (b) the ReLU, and (c) the tanh networks]

(c) The value of D is varied from 0.1 to 0.9 in steps of 0.1. For each value, the number of hidden neurons is noted; Figure 3a shows the variation of the number of hidden neurons with respect to D. The decision boundaries corresponding to the different values of D are shown in Figures 3b-3j. An important observation is that as the value of D decreases, the decision boundary must bend more sharply (higher curvature), indicating an increasingly complex decision boundary. This requires more hidden neurons in the hidden layer.


[Figure 3: (a) number of hidden neurons vs. D; (b)-(j) decision boundaries for the different values of D]
