Intro To Deep Learning
July 2015
CUDA for Deep Learning
Feature extraction -> Classifier/detector -> Result
Classifiers/detectors: SVM, shallow neural net, HMM, clustering, LDA, LSA
Example applications: speaker ID, speech transcription, topic classification, machine translation, sentiment analysis
[Figure: training examples labeled dog, cat, raccoon, honey badger, with classification errors; Deploy: dog]
Artificial neuron
[Figure: inputs x1, x2, x3 with weights w1, w2, w3 feeding a single output y]
y = F(w1*x1 + w2*x2 + w3*x3), with activation F(x) = max(0, x)
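As a concrete illustration, here is a minimal NumPy sketch of this neuron; the input and weight values are made-up examples, not from the slides:

```python
import numpy as np

def relu(x):
    """The activation F(x) = max(0, x)."""
    return np.maximum(0.0, x)

def neuron(x, w):
    """Single artificial neuron: y = F(w1*x1 + w2*x2 + w3*x3)."""
    return relu(np.dot(w, x))

# Made-up example values.
x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3
w = np.array([0.8,  0.1, 0.4])   # weights w1, w2, w3
print(neuron(x, w))              # weighted sum passed through the ReLU
```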
[Figure: artificial neural network, from input layer to output layer]
Given sufficient training data, an artificial neural network can approximate very complex functions mapping raw data to output decisions.
[Figure: learned feature hierarchy, from low-level to mid-level to high-level features]
Application components (mapping input to result):
- Task objective, e.g. identify face
- Training data: 10-100M images
- Network architecture: ~10 layers, 1B parameters
- Learning algorithm: ~30 Exaflops, ~30 GPU days
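As a rough sanity check on those last two figures (the arithmetic is illustrative; only the ~30 Exaflops and ~30 GPU days come from the slide), they imply a sustained throughput on the order of 10 TFLOPS:

```python
# Back-of-the-envelope check of the training budget quoted above (illustrative only).
total_flops = 30e18              # ~30 Exaflops of training compute
gpu_seconds = 30 * 24 * 3600     # ~30 GPU days
sustained_tflops = total_flops / gpu_seconds / 1e12
print(f"Implied sustained throughput: {sustained_tflops:.1f} TFLOPS")   # ~11.6 TFLOPS
```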
Generalizable
The same neural net approach can be used for many different applications and data types.

Scalable
Performance improves with more data; the method is massively parallelizable.
New DL Techniques
GPU acceleration
350 million images uploaded per day
2.5 petabytes of customer data hourly
GPU Entries
[Chart: number of GPU-based entries per year, 2010-2014]
[Figure: example detected object classes: person, car, bird, helmet, frog, motorcycle, dog, chair, hammer, flower pot, power drill]
STANFORD AI LAB (ICML 2013)
[Comparison: a 600 kWatt, $5,000,000 system vs. 3 GPU-accelerated servers with 12 GPUs (18,432 cores) at 4 kWatts and $33,000]
GPUs
- Inherently parallel
- Matrix operations
- FLOPS
- Bandwidth
GPU ACCELERATION
Training time, 7-layer network:

Batch Size  | CPU Training Time | GPU Training Time | Speed-Up
64 images   | 64 s              | 7.5 s             | 8.5X
128 images  | 124 s             | 14.5 s            | 8.5X
256 images  | 257 s             | 28.5 s            | 9.0X
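The throughput implied by the table rows (simple arithmetic on the listed times):

```python
# Images processed per second, derived from the training-time table above.
rows = [(64, 64.0, 7.5), (128, 124.0, 14.5), (256, 257.0, 28.5)]  # (batch, cpu_s, gpu_s)
for batch, cpu_s, gpu_s in rows:
    print(f"batch {batch:3d}: CPU {batch / cpu_s:4.1f} img/s, GPU {batch / gpu_s:4.1f} img/s")
```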
DL software landscape
The stack, from top to bottom:
- End user applications: image analysis, language processing
- DIGITS
- Deep learning frameworks (Caffe, Torch, Theano): GPU-accelerated, industry-standard research frameworks
- Libraries (cuDNN, cuBLAS): highly optimized performance libraries for commonly used, compute-intensive building blocks
- System software (drivers): CUDA, the best parallel programming toolkit
- Hardware: GPU, the world's best DL hardware, which can accelerate the DL building blocks
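To make the stack concrete, here is a minimal sketch using one of the listed frameworks (Caffe's Python interface) with the GPU path enabled; the model and weight file names are placeholders, and this assumes a Caffe build with pycaffe and GPU/cuDNN support:

```python
import numpy as np
import caffe

# Select the GPU path (CUDA + cuDNN underneath); use caffe.set_mode_cpu() otherwise.
caffe.set_device(0)
caffe.set_mode_gpu()

# Placeholder network definition and pretrained weights.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Push one image-shaped blob through the network and read the outputs.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input
net.blobs['data'].reshape(*image.shape)
net.blobs['data'].data[...] = image
outputs = net.forward()
print({name: blob.shape for name, blob in outputs.items()})
```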
GPU-ACCELERATED DEEP LEARNING FRAMEWORKS

               | CAFFE                        | TORCH                          | THEANO                   | KALDI
Domain         | Deep Learning Framework      | Scientific Computing Framework | Math Expression Compiler | Speech Recognition Toolkit
cuDNN          | 2.0                          | 2.0                            | 2.0                      | --
Multi-GPU      | via DIGITS 2                 | In Progress                    | In Progress              | (nnet2)
Multi-CPU      |                              |                                |                          | (nnet2)
License        | BSD-2                        | GPL                            | BSD                      | Apache 2.0
Interface(s)   | Command line, Python, MATLAB | Lua, Python, MATLAB            | Python                   |
Embedded (TK1) |                              |                                |                          |
http://developer.nvidia.com/deeplearning
All three deep learning frameworks (Caffe, Torch, Theano) are covered in the associated Intro to DL hands-on lab
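As a small illustration of the "math expression compiler" style listed for Theano above, a hedged sketch of one ReLU layer (assumes a standard Theano install; whether it runs on CPU or GPU is decided by Theano's configuration):

```python
import numpy as np
import theano
import theano.tensor as T

# Build a symbolic expression graph; Theano compiles it to CPU or GPU code.
x = T.matrix('x')
w = T.matrix('w')
b = T.vector('b')
y = T.maximum(0, T.dot(x, w) + b)      # one ReLU layer as a symbolic expression

layer = theano.function([x, w, b], y)  # compile the graph into a callable

out = layer(np.random.rand(4, 3), np.random.rand(3, 2), np.random.rand(2))
print(out.shape)                       # -> (4, 2)
```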
CUDNN V2 - PERFORMANCE (v3 coming soon)
[Chart: cuDNN v2 speed-up. CPU: 16-core Haswell E5-2698 at 2.3 GHz (3.6 GHz Turbo); GPU: NVIDIA Titan X]
How GPU acceleration works: the compute-intensive functions (roughly 5% of the code but ~80% of the run-time) run on the GPU, while the rest of the sequential code stays on the CPU.
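As a hedged illustration of why that split matters, Amdahl's law gives the overall speed-up when only the portion that dominates run-time is accelerated; the per-portion GPU speed-up factors below are made-up examples:

```python
# Amdahl's law: overall speed-up when a fraction p of run-time is accelerated by factor s.
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p = 0.80                     # ~80% of run-time spent in compute-intensive functions
for s in (5, 10, 100):       # hypothetical GPU speed-ups of that portion
    print(f"accelerate 80% of run-time by {s:3d}x -> overall {overall_speedup(p, s):.1f}x")

# For deep learning training, the compute-intensive fraction is typically far higher
# than 80%, which is why the end-to-end speed-ups in the earlier table are possible.
```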
CUDNN ROUTINES
Convolutions: 80-90% of the execution time
Pooling: spatial smoothing
https://developer.nvidia.com/cudnn
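For intuition about what these routines compute (this is not the cuDNN API, just a plain NumPy sketch of a 2D convolution followed by max pooling):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation, as DL frameworks use)."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response in each window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size                 # trim to a multiple of the window
    t = feature_map[:h, :w]
    return t.reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # simple edge-like 3x3 filter
features = conv2d(image, kernel)            # 6 x 6 feature map
print(max_pool(features).shape)             # -> (3, 3)
```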
DIGITS
Interactive Deep Learning GPU Training System

For data scientists & researchers:
- Quickly design the best deep neural network (DNN) for your data
- Visually monitor DNN training quality in real time
- Manage training of many DNNs in parallel on multi-GPU systems
DL deployment
HANDS-ON LAB
1. Create an account at nvidia.qwiklab.com
2. Go to Introduction to Deep Learning lab at bit.ly/dlnvlab1
3. Start the lab and enjoy!