Chap 4 Part 2: Artificial Neural Networks
Machine Learning CS-603

Instructor
Dr. Sanjay Chatterji
Introduction
● Artificial neural networks (ANNs) provide a general, practical method for learning real-valued, discrete-valued, and vector-valued functions from examples.
● ANN learning is robust to errors in the training data and has been successfully applied to problems such as interpreting visual scenes, speech recognition, and learning robot control strategies.
Biological Motivation
● The study of artificial neural networks (ANNs) has
been inspired in part by the observation that biological
learning systems are built of very complex webs of
interconnected neurons.
● Artificial neural networks are built out of a densely
interconnected set of simple units, where each unit
takes a number of real-valued inputs and produces a
single real-valued output (which may become the input
to many other units).
● The human brain is estimated to contain a densely interconnected network of approximately 10¹¹ neurons, each connected, on average, to 10⁴ others.
ALVINN
• ALVINN: A learned ANN to steer an autonomous
vehicle
• The input is a 30 x 32 grid of pixel intensities.
• The output is the direction in which the vehicle is
steered.
• ALVINN has successfully driven at 70 miles per hour for distances of 90 miles on public highways.
• In the network diagram, each node corresponds to a network unit, and the lines correspond to its inputs.
• The output of "hidden" units is available only within the network; it is not available as a global network output.
Properties of Artificial Neural Networks
● Units: a large number of neuron-like processing elements
● A large number of weighted, directed connections between pairs of units
● Local processing: what each unit computes is a function of the outputs of a limited number of other units in the network
● Each unit computes a simple function of its input values, which are the weighted outputs from other units.
− If there are n inputs to a unit, then its output (activation) is defined by a = g((w1 * x1) + (w2 * x2) + ... + (wn * xn)).
− That is, it computes a function g of the linear combination of its inputs.
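The per-unit computation can be sketched as follows. The slide does not fix a particular g, so tanh below is just one illustrative choice of activation function:

```python
import math

def unit_activation(weights, inputs, g=math.tanh):
    """Activation of a single unit: a = g(w1*x1 + ... + wn*xn).

    g is the unit's activation function; tanh is an illustrative
    choice (a purely linear unit would use g(z) = z).
    """
    z = sum(w * x for w, x in zip(weights, inputs))  # linear combination
    return g(z)

# Example: two inputs with weights 0.5 and -0.25
a = unit_activation([0.5, -0.25], [1.0, 2.0])
```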
Perceptron
● o(x₀, ..., xₙ) = 1 if Σᵢ₌₀ⁿ wᵢxᵢ > 0
                = −1 otherwise
  (by convention x₀ = 1, so w₀ acts as a threshold)
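The threshold rule translates directly into code (following the usual convention that inputs[0] is the constant x₀ = 1):

```python
def perceptron_output(weights, inputs):
    """o(x0, ..., xn) = 1 if sum_i w_i * x_i > 0, else -1.

    inputs[0] is the constant x0 = 1, so weights[0] plays the
    role of a (negative) threshold.
    """
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else -1
```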
Perceptron Learning
● Learning a perceptron involves choosing values for the weights w0, ..., wn.
● The space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.
Representational Power of Perceptrons
● A perceptron represents a hyperplane decision surface in the n-dimensional space of instances.
● The perceptron outputs 1 for instances lying on one side of the hyperplane and outputs −1 for instances lying on the other side.
● The equation for this decision hyperplane is w · x = 0.
Representational Power of Perceptrons
● Some sets of positive and negative examples
cannot be separated by any hyperplane. Those
that can be separated are called linearly
separable sets of examples.
● A single perceptron can be used to represent many boolean functions.
● AND, OR, NAND, and NOR are representable by a perceptron.
− XOR cannot be represented by a single perceptron.
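For a concrete instance, hand-chosen weights (one possibility among many; these particular values are not from the slides) let a single perceptron compute AND over inputs in {0, 1}:

```python
def perceptron(weights, inputs):
    # Threshold unit; inputs[0] is the constant x0 = 1 (bias input).
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else -1

# w0 = -1.5, w1 = w2 = 1 implements AND: the weighted sum exceeds 0
# only when both x1 and x2 are 1. (Similarly, w0 = -0.5 gives OR.)
AND_WEIGHTS = [-1.5, 1.0, 1.0]

truth_table = {(x1, x2): perceptron(AND_WEIGHTS, [1, x1, x2])
               for x1 in (0, 1) for x2 in (0, 1)}
```

No such weight vector exists for XOR: its positive examples (0,1) and (1,0) and negative examples (0,0) and (1,1) cannot be split by any single line.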
Multi-Layer Networks with Linear Units
● Example: XOR
Perceptron Training Rule
● wi ← wi + Δwi
● Δwi = η(t − o) xi
− t is the target value
− o is the perceptron output
− η is a small positive constant (e.g., 0.1) called the learning rate
● Will it converge? Yes, provided the training examples are linearly separable and η is sufficiently small.
● If the output is correct (t = o), the weights wi are not changed.
● If the output is incorrect (t ≠ o), the weights wi are changed.
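The training rule above can be sketched directly (epoch count and initial weights are arbitrary choices, not from the slides):

```python
def train_perceptron(examples, eta=0.1, epochs=50):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i.

    examples is a list of (inputs, target) pairs, where inputs
    already include the constant x0 = 1 and target is +1 or -1.
    Converges when the examples are linearly separable.
    """
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in examples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if o != t:  # t = o leaves the weights unchanged
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w

# Example: learn OR, which is linearly separable
or_data = [([1, 0, 0], -1), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
w = train_perceptron(or_data)
```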
Comparison of the Perceptron and Gradient Descent Rules
Multi-Layer Networks with Non-Linear Units
● Sigmoid Unit
Incremental (Stochastic) Gradient Descent
● Incremental gradient descent can approximate batch gradient descent arbitrarily closely if η is small enough.
● Batch mode:
− w ← w − η ∇ED[w], computed over the entire data set D
− ED[w] = ½ Σd∈D (td − od)²
● Incremental mode:
− w ← w − η ∇Ed[w], computed for each individual training example d
− Ed[w] = ½ (td − od)²
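For a linear unit o = w · x, the gradient of Ed[w] = ½(td − od)² is −(td − od)x, so one incremental step looks like this (η value is illustrative):

```python
def incremental_step(w, x, t, eta=0.05):
    """One stochastic (incremental) update for a linear unit o = w . x.

    Since grad Ed[w] = -(t - o) * x, the step w <- w - eta * grad
    becomes w_i <- w_i + eta * (t - o) * x_i, reducing the error on
    this single example; batch mode would instead sum these
    contributions over all of D before updating.
    """
    o = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
```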
Gradient Descent
• The key idea behind the delta rule is to use gradient descent to search the hypothesis space of possible weight vectors for the weights that best fit the training examples.
• Gradient descent provides the basis for the BACKPROPAGATION algorithm, which learns networks with many interconnected units.
• It applies when
  1. the hypothesis space contains continuously parameterized hypotheses, and
  2. the error can be differentiated w.r.t. these hypothesis parameters.
• Difficulties in applying gradient descent:
  1. converging to a local minimum can sometimes be quite slow;
  2. if there are multiple local minima in the error surface, there is no guarantee of finding the global minimum.
Gradient-Descent(training_examples, η)
● Each training example is a pair <(x1, ..., xn), t>; η is the learning rate.
● Initialize each wi to some small random value.
● Until the termination condition is met, do:
− Initialize each Δwi to zero.
− For each <(x1, ..., xn), t> in training_examples, do:
  ● Input (x1, ..., xn) to the linear unit and compute o.
  ● For each linear unit weight wi, do:
    − Δwi ← Δwi + η(t − o) xi
− For each linear unit weight wi, do:
  ● wi ← wi + Δwi
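The pseudocode above translates to the following sketch (the fixed iteration count stands in for an unspecified termination condition, and the initial weight range is an illustrative choice):

```python
import random

def gradient_descent(training_examples, eta=0.05, iterations=200):
    """Batch gradient descent for a linear unit: accumulate
    Delta_wi = eta * (t - o) * x_i over all examples, then apply
    the summed update once per pass.
    """
    n = len(training_examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n)]  # small random init
    for _ in range(iterations):
        delta = [0.0] * n
        for x, t in training_examples:
            o = sum(wi * xi for wi, xi in zip(w, x))  # linear unit output
            for i in range(n):
                delta[i] += eta * (t - o) * x[i]
        w = [wi + dwi for wi, dwi in zip(w, delta)]   # apply batch update
    return w

# Example: fit t = 2 * x1, with x0 = 1 as a bias input
data = [([1, 0.0], 0.0), ([1, 1.0], 2.0), ([1, 2.0], 4.0)]
w = gradient_descent(data)
```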
Sigmoid Unit
● Derive gradient descent rules to train:
− a single sigmoid unit:
  ● ∂E/∂wi = −Σd (td − od) od (1 − od) xid
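The gradient formula for a single sigmoid unit can be computed directly:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_unit_gradient(w, examples):
    """dE/dw_i = -sum_d (t_d - o_d) * o_d * (1 - o_d) * x_id
    for a single sigmoid unit over (inputs, target) pairs,
    matching the formula above term by term.
    """
    grad = [0.0] * len(w)
    for x, t in examples:
        o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i, xi in enumerate(x):
            grad[i] += -(t - o) * o * (1 - o) * xi
    return grad
```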

● Multilayer networks of sigmoid units → backpropagation
Backpropagation Algorithm
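The algorithm's details did not survive conversion from the slide, so here is a hedged sketch of one stochastic backpropagation update for a network with a single sigmoid hidden layer and one sigmoid output unit, minimizing ½(t − o)² (the layer shape and η are assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, t, W_hidden, W_out, eta=0.5):
    """One stochastic backpropagation update.

    W_hidden: one weight vector per hidden unit; W_out: weight
    vector of the single output unit. Returns updated weights.
    """
    # Forward pass
    h = [sigmoid(sum(w * xi for w, xi in zip(wh, x))) for wh in W_hidden]
    o = sigmoid(sum(w * hi for w, hi in zip(W_out, h)))

    # Backward pass: error terms for output and hidden units
    delta_o = o * (1 - o) * (t - o)
    delta_h = [hi * (1 - hi) * W_out[j] * delta_o for j, hi in enumerate(h)]

    # Weight updates: w <- w + eta * delta * input-to-that-weight
    W_out = [w + eta * delta_o * hi for w, hi in zip(W_out, h)]
    W_hidden = [[w + eta * dh * xi for w, xi in zip(wh, x)]
                for wh, dh in zip(W_hidden, delta_h)]
    return W_hidden, W_out
```

Each step nudges every weight in the direction that reduces the squared error on the current example, with the hidden-unit error terms obtained by propagating the output error backward through the output weights.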
Adv. and Disadv. in NN Learning
● Instances are represented by many attribute-value pairs.
● The target function output may be discrete-valued,
real-valued, or a vector of several real-valued or
discrete-valued attributes.
● The training examples may contain errors.
● Long training times are acceptable.
● Fast evaluation of the learned target function may be
required.
● The ability of humans to understand the learned target
function is not important.
Expressive Capabilities of ANNs
● Boolean functions
− Every boolean function can be represented by a network with a single hidden layer,
− but this might require a number of hidden units exponential in the number of inputs.
● Continuous functions
− Every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer.
● Any function can be approximated to arbitrary accuracy by a network with two hidden layers.
Hypothesis Space Search and Inductive Bias
● Every possible assignment of network weights represents a syntactically distinct hypothesis.
− The hypothesis space is continuous.
− E is differentiable w.r.t. the continuous parameters.
● Inductive bias of backpropagation:
− Difficult to characterize precisely
− Smooth interpolation between data points
Overfitting
● An appropriate condition for terminating the weight update rule: stop when the error falls below a threshold.
● Overfitting tends to occur during later iterations. Remedies:
− Weight decay
− k-fold cross-validation
Deep Neural Networks
Convolutional Neural Networks
● Model spatial-invariance information.
● Weight sharing and local connectivity:
− Multiple filters convolve over the inputs, possibly with different strides.
− Pooling (or subsampling)
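A minimal pure-Python sketch of the two operations the slide names. Real CNN layers also sum over input channels and add a bias, both omitted here to keep the sketch small:

```python
def conv2d_valid(image, kernel, stride=1):
    """'Valid' 2D convolution (cross-correlation, as in most CNN
    libraries): the same kernel weights are applied at every spatial
    position -- the weight sharing referred to above -- and each
    output value depends only on a local patch (local connectivity).
    """
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(0, len(image[0]) - kw + 1, stride)]
            for r in range(0, len(image) - kh + 1, stride)]

def max_pool(fmap, size=2):
    """Max pooling (subsampling) over non-overlapping size x size windows."""
    return [[max(fmap[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, len(fmap[0]) - size + 1, size)]
            for r in range(0, len(fmap) - size + 1, size)]
```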
Recurrent Neural Networks
● Model temporal information.
● The hidden state at each step is a function of the current inputs and the previous time step's information.
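The recurrence can be sketched in scalar form, h_t = tanh(W_x · x_t + W_h · h_{t−1}); scalar weights and the tanh nonlinearity are assumptions made to keep the sketch minimal:

```python
import math

def rnn_forward(xs, w_x, w_h, h0=0.0):
    """Scalar RNN sketch: h_t = tanh(w_x * x_t + w_h * h_{t-1}).

    The hidden state at each step combines the current input with
    the previous step's state, which is how the network carries
    temporal information forward through the sequence.
    """
    h, states = h0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h)  # reuse the same weights each step
        states.append(h)
    return states
```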
Thank You