Deep Learning
➢ Deep Learning is a subset of Machine Learning and is used to extract useful patterns from
data.
➢ The key differences between Machine Learning and Deep Learning are:
✓ Machine Learning models need human intervention to arrive at the optimal outcome.
✓ Deep Learning models make predictions independent of human intervention.
➢ Deep Learning can be classified into different types based on the architecture of the model,
such as feedforward artificial neural networks (ANN), convolutional neural networks (CNN), and
recurrent neural networks (RNN).
➢ Some of the applications of Deep Learning are:
✓ Self-Driving Cars
✓ Gaming (e.g., Pokémon)
✓ Machine Translation
✓ Computer Vision
What is Perceptron?
--->A Perceptron is a form of neural network introduced in 1958 by Frank Rosenblatt.
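For intuition, here is a minimal NumPy sketch of a perceptron; the class name, learning rate, and AND-gate training data are illustrative choices, not from the original notes.

import numpy as np

# A perceptron: weighted sum of inputs plus a bias, passed through a step function.
class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = np.zeros(n_inputs)  # one weight per input feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate

    def predict(self, x):
        # Step activation: fire (1) if the weighted sum plus bias is positive
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=10):
        # Classic perceptron learning rule: nudge weights toward misclassified points
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error

# Usage: learn the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.train(X, y)
print([p.predict(xi) for xi in X])  # [0, 0, 0, 1]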
------------------------------------------------------------------------------------------------------------------------
--->The elements of a neural network are the Input Layer, Hidden Layer, and Output Layer.
--->Each neuron receives an input signal and processes it with the help of an activation
function.
------------------------------------------------------------------------------------------------------------------------
Input Layer:
--->The input layer accepts the input features and provides information from the outside world to
the neural network.
Hidden Layer:
--->The hidden layer performs all sorts of computation on the features entered through the input layer.
Output Layer:
--->It brings the information learned by the neural network to the outer world.
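These three layers map directly onto code. Below is a minimal sketch, assuming TensorFlow/Keras is available; the layer sizes and activations are illustrative.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),             # input layer: accepts 4 features from the outside world
    layers.Dense(8, activation="relu"),     # hidden layer: computations on the input features
    layers.Dense(3, activation="softmax"),  # output layer: brings learned information to the outer world
])
model.summary()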
------------------------------------------------------------------------------------------------------------------------
--->An activation function calculates the "weighted sum" of its inputs, adds a bias, and then
decides whether a neuron should be activated or not. Common activation functions include:
1. Linear Function
2. Sigmoid Function
3. Tanh Function
4. ReLU Function
5. Softmax Function
Sigmoid: This function maps any input value to a value between 0 and 1, and is often used in the
output layer for binary classification.
ReLU (Rectified Linear Unit): This function maps any negative input value to zero
and keeps any positive input value unchanged. This activation function is widely
used in feed-forward neural networks, especially in convolutional neural networks (CNNs) and
deep learning models.
Tanh (Hyperbolic Tangent): This function maps any input value to a value between
-1 and 1 and is similar to the sigmoid function, but centers the data around zero.
Leaky ReLU: A variant of the ReLU function that allows small negative values to pass through
instead of zeroing them out.
Softmax: This function is used to produce a probability distribution over the output classes.
ELU (Exponential Linear Unit): This function is similar to ReLU, but for negative inputs it
follows a smooth exponential curve that saturates at a small negative value.
Swish: An activation function introduced in 2017, defined as x * sigmoid(x); it is similar in
shape to sigmoid-based functions but has been reported to outperform ReLU in some deep models.
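For reference, here are minimal NumPy sketches of the functions described above; parameter values such as alpha are common defaults, not taken from these notes.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # squashes input into (0, 1)

def tanh(x):
    return np.tanh(x)                        # squashes input into (-1, 1), zero-centered

def relu(x):
    return np.maximum(0, x)                  # zero for negatives, identity for positives

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)     # small slope for negative inputs

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))  # smooth curve below zero

def swish(x):
    return x * sigmoid(x)                    # input scaled by its own sigmoid

def softmax(x):
    e = np.exp(x - np.max(x))                # subtract max for numerical stability
    return e / e.sum()                       # probabilities that sum to 1

# Example: softmax turns raw scores into a probability distribution
print(softmax(np.array([2.0, 1.0, 0.1])))    # approx. [0.659, 0.242, 0.099]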
------------------------------------------------------------------------------------------------------------------------
Linear Function:
Equation: y = ax
--->No matter how many layers we have, if all of them are linear in nature, the final activation of
the last layer is nothing but a linear function of the input of the first layer.
--->We can use a linear function in one place: the output layer (for example, in regression).
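A quick NumPy check of this point: stacking two linear layers collapses into a single linear layer, so the extra depth adds nothing. The matrices below are random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # first linear "layer" (no activation)
W2 = rng.normal(size=(2, 3))   # second linear "layer"
x = rng.normal(size=4)

two_layers = W2 @ (W1 @ x)     # two stacked linear layers...
one_layer = (W2 @ W1) @ x      # ...equal one linear layer with weights W2 @ W1
print(np.allclose(two_layers, one_layer))  # True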
------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------
Tanh Function:
--->The activation that almost always works better than the sigmoid function is the Tanh function,
also known as the hyperbolic tangent function.
--->It is used in the hidden layers of a neural network as its values lie between -1 and 1.
------------------------------------------------------------------------------------------------------------------------
ReLU Function:
--->ReLU stands for Rectified Linear Unit. It is the most widely used activation function.
--->It is chiefly implemented in the hidden layers of a neural network.
--->We can easily backpropagate errors, and multiple layers of neurons are activated by the ReLU
function.
--->ReLU is less computationally expensive than the tanh and sigmoid functions, and it activates
only a few neurons at a time.
------------------------------------------------------------------------------------------------------------------------
Softmax Function:
--->The softmax function is also a type of sigmoid function, but it is handy when we are trying to
handle classification problems.
--->The softmax function is ideally used in the output layer of the classifier, where we are
actually trying to attain the probabilities that define the class of each input.
------------------------------------------------------------------------------------------------------------------------
What is Cost Function?
--->The cost function is used to measure the performance of a machine learning model for given
data.
--->It quantifies the error between predicted and expected values, and presents the result in the
form of a single real number.
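Below is a minimal NumPy sketch of two common cost functions; the sample targets and predictions are illustrative.

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average squared gap between prediction and target
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Cross-entropy for binary classification; eps guards against log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.7])
print(mse(y_true, y_pred))                   # a single real number summarizing the error
print(binary_cross_entropy(y_true, y_pred))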
------------------------------------------------------------------------------------------------------------------------
1. Activation Function
2. Cost Function
------------------------------------------------------------------------------------------------------------------------
What is Gradient Descent?
--->Gradient Descent is an optimization algorithm that is used for minimizing the cost
function (error).
--->It updates the various parameters of a machine learning model to minimize the cost function.
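A minimal NumPy sketch of gradient descent fitting a line y = w*x + b; the data, learning rate, and step count are all illustrative.

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0                  # true relationship the model should recover
w, b, lr = 0.0, 0.0, 0.01          # start from zero, small learning rate

for step in range(5000):
    y_pred = w * X + b
    # Gradients of the MSE cost with respect to each parameter
    grad_w = np.mean(2 * (y_pred - y) * X)
    grad_b = np.mean(2 * (y_pred - y))
    # Step each parameter in the direction that lowers the cost
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # approaches w = 2.0, b = 1.0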
------------------------------------------------------------------------------------------------------------------------
What is Backpropagation?
--->Backpropagation is used to calculate the error contribution of each neuron after a batch of data is
processed.
--->It calculates the error at the output and then distributes that error back through the network layers.
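A minimal NumPy sketch of backpropagation through one hidden layer, trained on XOR; the layer sizes, seed, and learning rate are illustrative.

import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass through the network
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error is calculated at the output...
    d_out = (out - y) * out * (1 - out)
    # ...then distributed back through the network layers
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Each layer's parameters are updated from its share of the error
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]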
------------------------------------------------------------------------------------------------------------------------
What is Dropout?
--->Dropout is a regularization technique in which randomly selected neurons are ignored during the
training process; these ignored neurons are not considered during a particular forward or backward
pass.
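A minimal NumPy sketch of (inverted) dropout; the rate and activation values are illustrative.

import numpy as np

def dropout(activations, rate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) >= rate   # randomly select neurons to keep
    # Ignored neurons output 0; kept ones are rescaled so the expected sum is unchanged
    return activations * mask / (1.0 - rate)

h = np.array([0.8, 0.3, 0.5, 0.9, 0.1, 0.7])
print(dropout(h))  # roughly half of the neurons are zeroed for this pass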
------------------------------------------------------------------------------------------------------------------------
What is Optimizer?
--->An optimizer is an algorithm that adjusts the weights and biases of a neural network, using
the gradients computed during backpropagation, to minimize the cost function (e.g., SGD, RMSprop, Adam).
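A minimal sketch of selecting an optimizer, assuming TensorFlow/Keras; the model and learning rate are illustrative.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
# The optimizer decides how the weights are updated from the gradients
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")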
------------------------------------------------------------------------------------------------------------------------
What is an Epoch?
--->An Epoch refers to one complete pass through the entire dataset during the training process,
in which the model updates its parameters to minimize the loss function.
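A minimal sketch showing epochs in a training call, assuming TensorFlow/Keras; the random data is purely illustrative.

import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4)
y = np.random.rand(100, 1)

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# epochs=10: the model makes 10 complete passes through the entire dataset,
# updating its parameters after every batch to minimize the loss
model.fit(X, y, epochs=10, batch_size=32)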
------------------------------------------------------------------------------------------------------------------------
--->A neural network that contains more than one hidden layer is called a Deep Neural Network.