

Neural Networks Lab 2

The Multi-Layer Perceptron

Introduction

In this lab we will consider the Multi-Layer Perceptron, or MLP. The MLP is a feed-forward neural
network that has numerous applications. It can be found in image processing, pattern recognition and
stock market predictions, just to name a few. During the 1980s it was proven that the MLP with
just one hidden layer can be trained such that it can approximate an arbitrary function with arbitrary
precision [1]. Therefore, in essence it is not necessary to implement multiple hidden layers, but it can
be considerably more efficient to do so, because multiple layers are able to compactly represent a high
degree of non-linearity [2].
The training of neural networks with hidden neurons has been an issue for years because of the
credit assignment problem. In 1986 Rumelhart et al. described the backpropagation algorithm [3]. This
algorithm uses the error of the layer closer to the output to estimate the error in the current layer, and it
is still widely applied to train MLPs. This lab assignment will not be an exception.
A significant limitation of the TLU from last week is that the classes of points in the input space
have to be linearly separable. Fortunately, the MLP can deal with non-linearity.

Aim of this assignment

The aim is to get familiar with the MLP and the backpropagation algorithm. We will learn how it can
be implemented and how it can be used to solve classification problems and to approximate functions.
Network pruning will be covered in the bonus exercise.

Theory questions (1.5 pt.)


a) What is the credit assignment problem?
b) What are the two factors that determine the error of a neuron in a hidden layer?
c) Give the definition of the sigmoid function. Also write down the derivative.
d) Why is it dubious to initialize the weights with very high values?
e) Describe three criteria that you can use to determine when to stop learning. Provide one disadvantage per criterion.
f) How can you speed up the learning of a network besides increasing the learning rate? Explain.
g) How can you verify that a network is generalizing for some training data?
h) Describe the problem of overfitting.
i) Explain what network pruning is and describe at least two ways of accomplishing it.

An MLP on paper (1 pt.)


a) Draw a two-layer neural network with 2 input neurons, 2 hidden neurons and 1 output neuron.
b) Why is this considered a two-layer network instead of a three-layer network?
c) How many weights does the network contain?
d) Which values would you want to keep track of when training an MLP?

Implementing an MLP in Matlab (3.5 pt.)

Now we are going to implement an MLP in Matlab. On Nestor you can find the files mlp_2011.m,
sigmoid.m, d_sigmoid.m and output_function.m. Download and save the files. Start up Matlab
and make sure the files are in your working directory.
Now we are going to build an MLP step-by-step that will be able to learn the XOR function using
the backpropagation algorithm.

5.1 Computing the activation (forward pass)

Take a look at the code. You will see a number of declared variables that will be used later on. The
weight matrices W_hidden ∈ R^(n_in × n_hidden) and W_output ∈ R^(n_hidden × n_output) are initialized
with random values. The range and average of these values can be set manually.
In the matrix W_hidden each row corresponds to all connections of one input neuron
to the hidden neurons. Each column corresponds to all connections that go to one hidden neuron.
In the matrix W_output each row corresponds to all connections of one hidden
neuron to the output neurons. Each column corresponds to all connections that go to one output
neuron.
5.1.1 The hidden layer

The for-loop is executed for every row in the matrix input_data. Compute the hidden activation
using a matrix-vector multiplication. You do not have to account for an activation function yet.
Hint: Contrary to last week, we now use the augmented input vector. The final input is a
constant, so you no longer have to subtract the threshold from the activation. The final weight
represents the threshold, and the activation of a hidden node reduces to an inner product.
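A minimal sketch of these two lines, assuming the current (augmented) input pattern is the n-th row of input_data; the loop index and variable names should be adapted to those actually declared in mlp_2011.m:

input_vector      = input_data(n, :);         % 1 x n_in row vector, bias term included
hidden_activation = input_vector * W_hidden;  % 1 x n_hidden: one inner product per hidden neuron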
5.1.2 The Sigmoid function

The hidden layer uses a logistic function called a sigmoid. This function is implemented in sigmoid.m.
The sigmoid function is defined as follows:
σ(x) = 1 / (1 + exp(−(x − θ)/ρ))

You can choose θ = 0 and ρ = 1. Now the function simplifies to:

σ(x) = 1 / (1 + exp(−x))

Make sure that the function is able to handle a vector x by applying the sigmoid element-wise
to x. Test your sigmoid function in the command window:
x = -3:0.1:3;
y = sigmoid(x);
plot(x,y);
Now there should be a graph of a sigmoid curve.
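For reference, a minimal element-wise implementation consistent with the simplified definition above (the sigmoid.m provided on Nestor may be organized differently):

function y = sigmoid(x)
% Element-wise logistic function with theta = 0 and rho = 1
y = 1 ./ (1 + exp(-x));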

5.1.3 The activation of the output

Now that we have the activation of the hidden layer we can compute the activation of the output layer
easily. Use an inner product here as well.
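Continuing the sketch from section 5.1.1, under the same assumptions about variable names:

hidden_output     = sigmoid(hidden_activation);  % 1 x n_hidden
output_activation = hidden_output * W_output;    % 1 x n_output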
5.1.4 The output function

The output function is the function that the output neurons apply to their activation to compute the
output. This function has to be implemented in output_function.m. Implement this function such that
it uses the sigmoid function (result = sigmoid(input)). This distinction seems pointless for now, but
later on we will implement it differently.
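A direct sketch of output_function.m as described above:

function result = output_function(input)
% For the XOR network the output neurons simply use the sigmoid
result = sigmoid(input);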
5.1.5 The backpropagation step

Now we know what the output is for the current input. Given an input pattern p with target t^p, the
error of the i-th output neuron with output y_i^p is given by:

e(y_i^p) = t_i^p − y_i^p

Chapter 6 of the book covers the delta rules for neural networks. These are defined as follows:

Δw_ij = η x_i δ_j

where η is the learning rate, x_i is the i-th input and δ_j is the j-th local gradient.
For the output neurons the k-th local gradient can be found as follows:

δ_k^o = σ'(a_k^o) (t_k^p − y_k^p)

And for the hidden neurons:

δ_k^h = σ'(a_k^h) Σ_j δ_j^o w_kj

Use the expressions above to compute delta_output and delta_hidden. Use the derivative of the
sigmoid, d_sigmoid, and the derivative of the output function, d_output_function (see section 5.1.6).
Proceed as follows:
i. Compute the error vector e^o and assign it to output_error.
ii. Compute δ^o and assign it to a variable local_gradient_output.
iii. Compute the error vector e^h and assign it to hidden_error.
iv. Compute δ^h and assign it to a variable local_gradient_hidden.
v. Compute the ΔW for W_hidden and for W_output.


Hints
1. Try to use Matlab's facilities for matrix multiplications.
2. Note that

δ_k^h = σ'(a_k^h) Σ_j δ_j^o w_kj = σ'(a_k^h) e_k^h

where e_k^h = Σ_j δ_j^o w_kj is the error of the k-th hidden neuron. So the vector with all errors in the
hidden layer can be found as follows:

e^h = ( Σ_j δ_j^o w_1j ,  Σ_j δ_j^o w_2j ,  ... ,  Σ_j δ_j^o w_Nj ) = δ^o W_output^T

3. To compute the matrix ΔW consider the following:

Δw_ij = η x_i δ_j

Hence we have

ΔW = η [ x_1 δ_1   x_1 δ_2   ...   x_1 δ_n
         x_2 δ_1   x_2 δ_2   ...   x_2 δ_n
           ...       ...     ...     ...
         x_m δ_1   x_m δ_2   ...   x_m δ_n ]  =  η x^T δ
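Combining the five steps with the hints above, one possible row-vector implementation of the backpropagation step is sketched below. The names output_error, local_gradient_output, hidden_error and local_gradient_hidden are the ones asked for in the assignment; eta, target, output, output_activation, hidden_activation, hidden_output and input_vector are placeholders that should be matched to the variables declared in mlp_2011.m:

output_error          = target - output;                               % e^o = t - y
local_gradient_output = d_output_function(output_activation) .* output_error;
hidden_error          = local_gradient_output * W_output';             % e^h = delta^o * W_output^T
local_gradient_hidden = d_sigmoid(hidden_activation) .* hidden_error;
delta_W_output        = eta * hidden_output' * local_gradient_output;  % Delta W = eta * x^T * delta
delta_W_hidden        = eta * input_vector'  * local_gradient_hidden;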
5.1.6 The slope: d/dx sigmoid(x)

In the file d_sigmoid.m the derivative of the sigmoid function is implemented. The function only works if
you have properly defined the sigmoid function. Create a file named d_output_function.m and implement
the derivative of the output function.
Finally, you have to make sure that the weight matrices are updated. The program should work now.
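Two small sketches for these remaining pieces, assuming the weight changes computed in the sketch above are stored in delta_W_hidden and delta_W_output. In d_output_function.m (as long as the output neurons use the sigmoid):

function result = d_output_function(input)
% Derivative of the output function; identical to the sigmoid derivative for now
result = d_sigmoid(input);

And at the end of the loop body, the weight update:

W_hidden = W_hidden + delta_W_hidden;
W_output = W_output + delta_W_output;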

5.2 Implementing the stop criterion

To make sure that we do not always have to wait until the maximum number of epochs has passed, we
will implement a stop criterion. Change the program such that the while loop is terminated when the
error drops below a certain value. Call this value min_error and set it to 0.1.
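One possible form of the modified loop condition, assuming the epoch counter, the maximum number of epochs and the running error are called epoch, max_epochs and current_error (match these to the names used in mlp_2011.m; avoid calling the variable error, since that shadows a built-in Matlab function):

min_error = 0.1;
while (epoch < max_epochs) && (current_error > min_error)
    % forward pass, backpropagation and weight updates as before
end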

Testing the MLP (1.5 pt.)

Set the learning rate to 0.2, the number of hidden neurons to 2 and the noise level to 5% = 0.05.
Answer the following questions:
a) Is it guaranteed that the network finds a solution? Why so?
Set the number of hidden neurons to 20.
b) About how many epochs are needed to find a solution?
c) Set the noise level to 0.5. Explain what happens.
d) Set the noise level to 0.2. Change the weight spread to 5. What can you observe? Explain
your results using the delta-rule.
e) Set the noise level to 1%. Leave the weight spread at 5. There are two qualitatively different
solutions to the XOR problem. What are these two? Include a figure of both solutions.
f) Which shape does the graph of the error usually have? Explain the shape with the use of the
delta-rule.

Another function (1.5 pt.)

As stated before, it is proven that an MLP can approximate an arbitrary function. This means that we
could make our MLP learn to approximate any function. The file mlp_sinus.m is a lot like mlp_2011.m
and can be found on Nestor. The for-loop still needs to be implemented. Copy the body of the for-loop
from mlp_2011.m to mlp_sinus.m. Do not forget to implement the stop criterion.
This network has 1 output and 1 input. Given an input x, the network should give the output sin(x).
To learn a sine the network also needs to output negative values. Therefore, we have to change the
output function: we need a linear one. Modify the code such that the output layer uses the
identity function (f(x) = x) for its activation. Make sure you use the derivative of the identity function
in the delta rule.
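With a linear output, the two output files reduce to the identity and its constant derivative; a sketch, written element-wise so vectors are handled as well. In output_function.m:

function result = output_function(input)
% Identity output: f(x) = x
result = input;

And in d_output_function.m:

function result = d_output_function(input)
% Derivative of the identity: f'(x) = 1 for every element
result = ones(size(input));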

Set the learning rate to 0.05 and the maximum number of epochs to 5000. Use 20 neurons in the
hidden layer. The network will now attempt to learn a sine in the domain 0 ≤ x < 2π. Answer the
following questions:
a) Is the network capable of learning the sine function?
b) Set n_examples at the top of the file to 5. Rerun the simulation. What can you observe? With
which feature of neural networks does this phenomenon correspond?
c) Set plot_bigger_picture to true. How is the domain of the network determined? What happens
if the input is outside of this domain?
d) At least how many neurons are required to learn a sine?
e) You have modified the output function for this part of the lab assignment. Does the XOR learning
network still work? Why so?

Bonus: Network pruning

Copy the file mlp_sinus.m and name it mlp_sinus_pruning.m. Train this network to approximate a
sine with 50 hidden neurons. Set n_examples to 20. Train the network until the error drops below 0.01.
Obviously, we could do with fewer than 50 neurons. Let's prune the network:
1. Set the incoming and outgoing weights of a single neuron to 0. Make a backup of the old weights.
2. Compute the mean squared error of the network over all example inputs.
3. Repeat 1 and 2 for all neurons in the hidden layer. The least significant neuron can then be
pruned by permanently setting its weights to 0. The least significant neuron is the one that causes
the smallest increase of the error.
4. Repeat 1 to 3 as long as the error stays under 0.1 (a sketch of this loop is given below).
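A rough sketch of this brute-force procedure. The helper mse_of_network(W_hidden, W_output, examples), which should return the mean squared error over all example inputs, and the variable names n_hidden and examples are placeholders, not part of the provided files:

pruned = false(1, n_hidden);                          % neurons that have been removed so far
while true
    best_error = inf;  best_k = 0;
    for k = find(~pruned)                             % steps 1 and 2: try removing each remaining neuron
        Wh_backup = W_hidden;  Wo_backup = W_output;  % back up the old weights
        W_hidden(:, k) = 0;  W_output(k, :) = 0;      % zero its incoming and outgoing weights
        e = mse_of_network(W_hidden, W_output, examples);
        if e < best_error, best_error = e; best_k = k; end
        W_hidden = Wh_backup;  W_output = Wo_backup;  % restore the backup
    end
    if best_error > 0.1, break; end                   % step 4: stop once the error would exceed 0.1
    W_hidden(:, best_k) = 0;  W_output(best_k, :) = 0;  % step 3: prune the least significant neuron
    pruned(best_k) = true;
end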
Answer the following questions:
a) How many neurons are required to approximate the sine when pruning a network of 50 hidden neurons?
b) Compare your answer with your answer to (d) in section 7. What can you say about brute force
pruning?

What to hand in

Write a report that adheres to the guidelines on Nestor. Include all of your code and provide meaningful
comments to the code.

References
[1] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[2] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin, "Exploring strategies for training deep neural networks," The Journal of Machine Learning Research, vol. 10, pp. 1-40, 2009.
[3] D. Rumelhart, G. Hinton, and R. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, no. 6088, pp. 533-536, 1986.
