Unit 4
Introduction
• Deep learning has achieved remarkable milestones over the past few years, from defeating professional poker players to autonomous driving.
• Accomplishing such tasks requires complex methodologies, which in turn result in complex systems.
• Even now, researchers often resort to trial and error to build certain models.
• ParaDnn, a parameterised deep learning benchmark suite, was introduced by researchers at Harvard University.
• This suite generates end-to-end models for fully connected (FC), convolutional (CNN), and recurrent (RNN) neural networks.
• In ParaDnn, the model types cover 95% of Google’s
TPU workloads, all of Facebook’s deep learning
models, and eight out of nine MLPerf models.
• The image classification/detection and sentiment analysis models are CNNs; the recommendation and translation models are FCs; and the RNN translator and another version of sentiment analysis are RNNs.
• In addition to ParaDnn, the researchers also included two workloads written in TensorFlow from MLPerf: Transformer and ResNet-50.
TPU vs GPU vs CPU: A Cross-Platform
Comparison
• The researchers performed a cross-platform comparison to identify the most suitable platform for the models of interest. The key takeaway is that no single platform is best for all scenarios; the takeaways are mentioned below.
1. Rescale Data
• When our data consists of attributes with differing scales, many ML algorithms can benefit from rescaling the attributes.
• Rescaling means that all attributes of the dataset share the same scale, so measurements across the dataset stay uniform.
• This uniformity also helps optimization algorithms that operate on the dataset; a sketch follows this list.
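To make this concrete, here is a minimal sketch of min-max rescaling with scikit-learn's MinMaxScaler; the sample values are illustrative only.

```python
# Min-max rescaling: map every attribute to a common [0, 1] range.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Attributes on very different scales (e.g. age in years, income in dollars);
# the values are illustrative only.
data = np.array([[25, 50_000.0],
                 [40, 120_000.0],
                 [32, 75_000.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
rescaled = scaler.fit_transform(data)
print(rescaled)  # each column now lies in the same [0, 1] range
```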
2. Binarize Data
• Binarization is a process used to transform the data features of an entity into binary numbers, allowing some algorithms to classify more efficiently.
• To convert data to binary form, we can apply a binary threshold: all values above the threshold are marked as 1, and all values equal to or below the threshold are marked as 0.
• This is called binarizing your data. It can be helpful when you have probability values that you want to turn into crisp values; a sketch follows this list.
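A minimal sketch of thresholding with scikit-learn's Binarizer; the 0.5 threshold is an illustrative choice.

```python
# Binarization: values above the threshold become 1, the rest become 0.
import numpy as np
from sklearn.preprocessing import Binarizer

data = np.array([[0.2, 0.8],
                 [0.5, 1.4],
                 [0.9, 0.1]])

binarizer = Binarizer(threshold=0.5)
binary = binarizer.fit_transform(data)
print(binary)  # [[0. 1.] [0. 1.] [1. 0.]]  (0.5 itself maps to 0)
```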
3. Data Augmentation
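Data augmentation expands the training set by applying label-preserving transformations, such as flips, rotations, and zooms, to existing samples. Below is a minimal sketch using Keras preprocessing layers; the specific transformations and their strengths are illustrative choices.

```python
# Image augmentation pipeline built from Keras preprocessing layers.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # mirror images left-right
    tf.keras.layers.RandomRotation(0.1),       # rotate up to ±10% of a full turn
    tf.keras.layers.RandomZoom(0.1),           # zoom in/out by up to 10%
])

# Apply to a dummy batch of 8 RGB images of size 32x32.
images = tf.random.uniform((8, 32, 32, 3))
augmented = augment(images, training=True)  # transformations are active only in training mode
print(augmented.shape)  # (8, 32, 32, 3)
```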
Objective
• Learn how to improve a neural network using Batch Normalization.
• Understand the advantages that batch normalization offers.
What is Batch Normalization?
• Normalization is a data pre-processing tool used to bring numerical data to a common scale without distorting its shape.
• Generally, when we feed data to a machine or deep learning algorithm, we tend to change the values to a balanced scale.
• The reason we normalize is partly to ensure that our model can generalize appropriately.
• Batch normalization applies the same idea inside the network: it makes neural networks faster and more stable by adding extra layers to a deep neural network.
• Each new layer standardizes and normalizes the input it receives from the previous layer, one mini-batch at a time; a sketch follows.
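A minimal sketch of adding BatchNormalization layers between Dense layers in a Keras model; the layer sizes are illustrative.

```python
# Batch normalization inserted after each hidden layer, before the activation.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64),
    tf.keras.layers.BatchNormalization(),  # standardize the previous layer's output per mini-batch
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(32),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

Placing the normalization before the activation, as above, is one common convention; placing it after the activation also appears in practice.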
Deep Learning using Transfer Learning
• Commonly used pre-trained backbones include the following (a usage sketch follows the list):
• VGG-16
• VGG-19
• Inception V3
• ResNet-50
• Xception
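A minimal sketch of transfer learning with VGG-16 from keras.applications: freeze the pre-trained convolutional base and train only a new classification head. The 10-class head and input size are illustrative assumptions; the same pattern applies to the other backbones listed above.

```python
# Transfer learning: reuse ImageNet features, train a new task-specific head.
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet",   # start from ImageNet-trained weights
    include_top=False,    # drop the original 1000-class classifier
    input_shape=(224, 224, 3),
)
base.trainable = False    # freeze the convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical 10-class task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```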
Hyper-parameter Tuning Techniques in
Deep Learning
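As a starting point, here is a minimal sketch of one simple tuning technique, a plain grid search over the learning rate; the candidate values, toy data, and model are illustrative assumptions.

```python
# Grid search over the learning rate for a small Keras classifier.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 20).astype("float32")   # toy features
y = np.random.randint(0, 2, size=(200,))        # toy binary labels

def build_model(lr):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

best_lr, best_acc = None, 0.0
for lr in [1e-2, 1e-3, 1e-4]:  # candidate learning rates
    hist = build_model(lr).fit(x, y, epochs=3, validation_split=0.2, verbose=0)
    acc = hist.history["val_accuracy"][-1]
    if acc > best_acc:
        best_lr, best_acc = lr, acc

print(f"best learning rate: {best_lr} (val accuracy {best_acc:.3f})")
```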