Statistics and Machine Learning in Python
Release 0.3 beta
1 Introduction
  1.1 Python ecosystem for data-science
  1.2 Introduction to Machine Learning
  1.3 Data analysis methodology

2 Python language
  2.1 Import libraries
  2.2 Basic operations
  2.3 Data types
  2.4 Execution control statements
  2.5 Functions
  2.6 List comprehensions, iterators, etc.
  2.7 Regular expression
  2.8 System programming
  2.9 Scripts and argument parsing
  2.10 Networking
  2.11 Modules and packages
  2.12 Object Oriented Programming (OOP)
  2.13 Exercises

3 Scientific Python
  3.1 Numpy: arrays and matrices
  3.2 Pandas: data manipulation
  3.3 Matplotlib: data visualization

4 Statistics
  4.1 Univariate statistics

  5.8 Gradient descent

6 Deep Learning
  6.1 Backpropagation
  6.2 Multilayer Perceptron (MLP)
  6.3 Convolutional neural network
  6.4 Transfer Learning Tutorial
CHAPTER ONE
INTRODUCTION
1.1 Python ecosystem for data-science

1. Python language

• Interpreted
• Garbage collected (garbage collection does not prevent memory leaks)
• Dynamically-typed language (Java is statically typed)

2. Anaconda

Install additional packages. The commands below install the Qt back-end (a fix for a temporary issue when running Spyder).
List installed packages:

conda list

Search available packages:

conda search pyqt
conda search scikit-learn
Environments

• A conda environment is a directory that contains a specific collection of conda packages that you have installed.
• Control a packages environment for a specific purpose: collaborating with someone else, delivering an application to your client.
• Switch between environments.
Miniconda

Anaconda without the collection of (>700) packages. With Miniconda you download only the packages you want with the conda command: conda install PACKAGENAME

1. Download Miniconda (Python 3.x): https://conda.io/miniconda.html
2. Install it, on Linux:

bash Miniconda3-latest-Linux-x86_64.sh

4. Install required packages:
1.1.3 Commands

Interactive mode:

python

For neuroimaging:

pip install -U --user nibabel
pip install -U --user nilearn

3 or 4 panels: text editor, help/variable explorer, IPython interpreter.

Shortcuts: F9 runs the current line/selection.
1.1.4 Libraries

scipy.org: https://www.scipy.org/docs.html

Numpy: basic numerical operations, matrix operations plus some basic solvers:

import numpy as np

X = np.array([[1, 2], [3, 4]])
# v = np.array([1, 2]).reshape((2, 1))
v = np.array([1, 2])

np.dot(X, v)        # matrix-vector product, no broadcasting
X * v               # element-wise product, with broadcasting
np.dot(v, X)
X - X.mean(axis=0)  # subtract the column means (broadcasting)
Matplotlib: visualization:

import numpy as np
import matplotlib.pyplot as plt
# %matplotlib qt

x = np.linspace(0, 10, 50)
sinus = np.sin(x)

plt.plot(x, sinus)
plt.show()
• Linear models.
• Non-parametric statistics.
• Linear algebra: matrix operations, inversion, eigenvalues.
CHAPTER TWO
PYTHON LANGUAGE
2.1 Import libraries

# import a function
from math import sqrt
sqrt(25)   # no longer have to reference the module

# define an alias
import numpy as np
2.2 Basic operations

# Numbers
10 + 4          # add (returns 14)
10 - 4          # subtract (returns 6)
10 * 4          # multiply (returns 40)
10 ** 4         # exponent (returns 10000)
10 / 4          # divide (returns 2.5 in Python 3; integer division in Python 2 returns 2)
10 / float(4)   # divide (returns 2.5)
5 % 4           # modulo (returns 1) - also known as the remainder
# Boolean operations
# comparisons (these return True)
5 > 3
5 >= 3
5 != 3
5 == 5
2.3 Data types

2.3.1 Lists

Different objects categorized along a certain ordered sequence. Lists are ordered, iterable, mutable (adding or removing objects changes the list size), and can contain multiple data types.
# create a list
simpsons = ['homer', 'marge', 'bart']

# examine a list
simpsons[0]      # print element 0 ('homer')
len(simpsons)    # returns the length (3)

# find elements in a list
simpsons.count('lisa')    # counts the number of instances
simpsons.index('itchy')   # returns index of first instance (raises ValueError if absent)

# list slicing [start:end:stride]
weekdays = ['mon', 'tues', 'wed', 'thurs', 'fri']
weekdays[0]      # element 0
weekdays[0:3]    # elements 0, 1, 2
weekdays[:3]     # elements 0, 1, 2
weekdays[3:]     # elements 3, 4
weekdays[-1]     # last element (element 4)
weekdays[::2]    # every 2nd element (0, 2, 4)
weekdays[::-1]   # backwards (4, 3, 2, 1, 0)

# alternative method for returning the list backwards
list(reversed(weekdays))

# examine objects
# (num, same_num and new_num are assumed to have been defined above,
#  e.g. num = [1, 2, 3]; same_num = num; new_num = list(num))
id(num) == id(same_num)   # returns True
id(num) == id(new_num)    # returns False
num is same_num           # returns True
num is new_num            # returns False
num == same_num           # returns True
num == new_num            # returns True (their contents are equivalent)

# concatenate +, replicate *
[1, 2, 3] + [4, 5, 6]
["a"] * 2 + ["b"] * 3
2.3.2 Tuples

Like lists, but their size cannot change: ordered, iterable, immutable, can contain multiple data types.

# create a tuple
digits = (0, 1, 'two')           # create a tuple directly
digits = tuple([0, 1, 'two'])    # create a tuple from a list
zero = (0,)                      # trailing comma is required to indicate it's a tuple

# examine a tuple
digits[2]          # returns 'two'
len(digits)        # returns 3
digits.count(0)    # counts the number of instances of that value (1)
digits.index(1)    # returns the index of the first instance of that value (1)

# concatenate tuples
digits = digits + (3, 4)

# tuple unpacking
bart = ('male', 10, 'simpson')   # create a tuple
2.3.3 Strings

# concatenate strings
s3 = 'The meaning of life is'
s4 = '42'
s3 + ' ' + s4        # returns 'The meaning of life is 42'
s3 + ' ' + str(42)   # same thing

# string formatting
# more examples: http://mkaz.com/2012/10/10/python-string-format/
'pi is {:.2f}'.format(3.14159)   # returns 'pi is 3.14'

Out:

first line
second line

Out:

first line\nfirst line

print(s.decode('utf-8').split())

Out:

b'first line\nsecond line'
['first', 'line', 'second', 'line']
2.3.5 Dictionaries

Dictionaries are structures which can contain multiple data types, organized as key-value pairs: for each (unique) key, the dictionary outputs one value. Keys can be strings, numbers, or tuples, while the corresponding values can be any Python object. Dictionaries are: unordered, iterable, mutable.
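The family dictionary examined below is created earlier in the original; a minimal sketch, with contents inferred from the outputs shown here:

# create a dictionary
family = {'dad': 'homer', 'mom': 'marge', 'size': 6}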
# examine a dictionary
family['dad']       # returns 'homer'
len(family)         # returns 3
family.keys()       # returns list: ['dad', 'mom', 'size']
family.values()     # returns list: ['homer', 'marge', 6]
family.items()      # returns list of tuples:
                    # [('dad', 'homer'), ('mom', 'marge'), ('size', 6)]
'mom' in family     # returns True
'marge' in family   # returns False (only checks keys)

# modify a dictionary (does not return the dictionary)
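The modification code itself is missing here; a minimal sketch of typical operations (keys and values are hypothetical):

family['cat'] = 'snowball'      # add a new entry
family['cat'] = 'snowball ii'   # edit an existing entry
del family['cat']               # delete an entry
family.pop('dad')               # remove an entry and return its value ('homer')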
Out:
Error 'grandma'
2.3.6 Sets

Like dictionaries, but with unique keys only (no corresponding values). They are: unordered, iterable, mutable, can contain multiple data types made up of unique elements (strings, numbers, or tuples).

# create an empty set
empty_set = set()

# create a set
languages = {'python', 'r', 'java'}          # create a set directly
snakes = set(['cobra', 'viper', 'python'])   # create a set from a list

# examine a set
len(languages)          # returns 3
'python' in languages   # returns True

# set operations
languages & snakes   # returns intersection: {'python'}
languages | snakes   # returns union: {'cobra', 'r', 'java', 'viper', 'python'}
languages - snakes   # returns set difference: {'r', 'java'}
snakes - languages   # returns set difference: {'cobra', 'viper'}
Out:
Error 'c'
2.4 Execution control statements

2.4.1 Conditional statements

x = 3

# if statement
if x > 0:
    print('positive')

# if/else statement
if x > 0:
    print('positive')
else:
    print('zero or negative')

# if/elif/else statement
if x > 0:
    print('positive')
elif x == 0:
    print('zero')
else:
    print('negative')
Out:
positive
positive
positive
positive
2.4.2 Loops

Loops are a set of instructions which repeat until termination conditions are met. This can include iterating through all values in an object, going through a range of values, etc.

# range returns a sequence of integers
range(0, 3)     # returns [0, 1, 2]: includes first value but excludes second value
range(3)        # same thing: starting at zero is the default
range(0, 5, 2)  # returns [0, 2, 4]: third argument specifies the 'stride'

# for loop
fruits = ['apple', 'banana', 'cherry']
for i in range(len(fruits)):
    print(fruits[i].upper())
# while loop
count = 0
while count < 5:
    print("This will print 5 times")
    count += 1    # equivalent to 'count = count + 1'

Out:

APPLE
BANANA
CHERRY
APPLE
BANANA
CHERRY
dad homer
mom marge
size 6
1 apple
2 banana
3 cherry
Can't find the banana
Found the banana!
This will print 5 times
This will print 5 times
This will print 5 times
This will print 5 times
This will print 5 times

2.4.3 Exceptions handling
dct = dict(a=[1, 2], b=[4, 5])
key = 'c'
try:
    dct[key]
except:
    print("Key %s is missing. Add it with empty value" % key)
    dct['c'] = []

print(dct)

Out:

Key c is missing. Add it with empty value
{'a': [1, 2], 'b': [4, 5], 'c': []}
2.5 Functions

Functions are sets of instructions launched when called upon. They can have multiple input values and a return value.

# define a function with one argument and no return values
def print_this(x):
    print(x)

# call the function
print_this(3)       # prints 3
n = print_this(3)   # prints 3, but doesn't assign 3 to n
                    # because the function has no return statement

def add(a, b):
    return a + b

add(2, 3)
add("deux", "trois")
add(["deux", "trois"], [2, 3])
# define a function with one argument and one return value
def square_this(x):
    return x ** 2

# include an optional docstring to describe the effect of a function
def square_this(x):
    """Return the square of a number."""
    return x ** 2

# call the function
square_this(3)         # prints 9
var = square_this(3)   # assigns 9 to var, but does not print 9

# default arguments
def power_this(x, power=2):
    return x ** power

power_this(2)      # 4
power_this(2, 3)   # 8
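The min_max helper and the nums list used below are defined earlier in the original; a minimal sketch consistent with the comment that follows:

nums = [1, 2, 3]                 # hypothetical input list
def min_max(nums):
    return min(nums), max(nums)  # returns a tuple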
# return values can be assigned into multiple variables using tuple unpacking
min_num, max_num = min_max(nums)   # min_num = 1, max_num = 3
Out:

this is text
3
3
2.6 List comprehensions, iterators, etc.

Process which affects whole lists without iterating through loops. For more: http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html

# for loop to create a list of cubes
nums = [1, 2, 3, 4, 5]
cubes = []
for num in nums:
    cubes.append(num**3)

# equivalent list comprehension
cubes = [num**3 for num in nums]

# list comprehension with a condition
# syntax: [expression for variable in iterable if condition]
cubes_of_even = [num**3 for num in nums if num % 2 == 0]   # [8, 64]

# flatten a nested list
# (matrix is assumed to be defined above, e.g. matrix = [[1, 2], [3, 4]])
items = [item for row in matrix for item in row]
# [1, 2, 3, 4]
# set comprehension
fruits = ['apple', 'banana', 'cherry']
unique_lengths = {len(fruit) for fruit in fruits}
# {5, 6}

# dictionary comprehension
fruit_lengths = {fruit: len(fruit) for fruit in fruits}
# {'apple': 5, 'banana': 6, 'cherry': 6}

2.7 Regular expression

# 1. Compile a regular expression with a pattern
import re
regex = re.compile("^.+(sub-.+)_(ses-.+)_(mod-.+)")
regex.sub("SUB-", "toto")   # returns 'toto': the pattern does not match, so the string is unchanged
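The outputs of the compiled pattern are not shown above; a minimal sketch applying it to a hypothetical BIDS-like filename:

# findall returns the captured groups
print(regex.findall("abcsub-033_ses-01_mod-mri"))
# [('sub-033', 'ses-01', 'mod-mri')]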
2.8 System programming

Current working directory:

import os

# Get the current working directory
cwd = os.getcwd()
print(cwd)
Out:
/home/edouard/git/pystatsml/python_lang
Temporary directory:

import tempfile
tmpdir = tempfile.gettempdir()

Join paths:

mytmpdir = os.path.join(tmpdir, "foobar")

Create a directory:

if not os.path.exists(mytmpdir):
    os.mkdir(mytmpdir)

os.makedirs(os.path.join(tmpdir, "foobar", "plop", "toto"), exist_ok=True)
# Write
lines = ["Dans python tout est bon", "Enfin, presque"]
Out:

/tmp/foobar/myfile.txt

# Read
## read one line at a time (entire file does not have to fit into memory)
f = open(filename, "r")
f.readline()   # one string per line (including newlines)
f.readline()   # next line
f.close()

## use a context manager to automatically close your file
with open(filename, 'r') as f:
    lines = [line for line in f]
Walk:

import os

WD = os.path.join(tmpdir, "foobar")
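The walk loop itself is not shown here; a minimal sketch over the directory created above:

# walk the directory tree rooted at WD
for dirpath, dirnames, filenames in os.walk(WD):
    print(dirpath, dirnames, filenames)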
import shutil

tmpdir = tempfile.gettempdir()

# src and dst are file/directory paths defined earlier (not shown here)
shutil.copy(src, dst)

try:
    shutil.copytree(src, dst)
    shutil.rmtree(dst)
    shutil.move(src, dst)
except (FileExistsError, FileNotFoundError) as e:
    pass
Out:
copy /tmp/foobar/myfile.txt to /tmp/foobar/plop/myfile.txt
File /tmp/foobar/plop/myfile.txt exists ? True
copy tree /tmp/foobar/plop under /tmp/plop2
Run an external command with subprocess.run():

• For more advanced use cases, the underlying Popen interface can be used directly.
• Runs the command described by args.
• Waits for the command to complete.
• Returns a CompletedProcess instance.
• Does not capture stdout or stderr by default. To do so, pass PIPE for the stdout and/or stderr arguments.
import subprocess

# Capture output
out = subprocess.run(["ls", "-a", "/"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# out.stdout is a sequence of bytes that should be decoded into a utf-8 string
print(out.stdout.decode('utf-8').split("\n")[:5])
Out:
0
[ '. ' , ' . . ' , 'bin', 'boot', 'cdrom']
Process

A process is the name given to a program instance that has been loaded into memory and is managed by the operating system.

Process = address space + execution context (thread of control).

Process address space (segments):

• Code.
• Data (static/global).
• Heap (dynamic memory allocation).
• Stack.

Execution context:
• Data registers.
• Stack pointer (SP).
• Program counter (PC).
• Working registers.

Threads

• Threads share the same address space (data registers): access to code, heap and (global) data.
• Separate execution stack, PC and working registers.

Pros/cons

• Faster context switching: only SP, PC and working registers change.
• Can exploit fine-grain concurrency.
• Simple data sharing through the shared address space.
• Precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock (GIL) is for.
• Relevant for GUI and I/O (network, disk) concurrent operations.

In Python

• The threading module uses threads.
• The multiprocessing module uses processes.

Multithreading
import time
import threading

def list_append(count, sign=1, out_list=None):
    if out_list is None:
        out_list = list()
    for i in range(count):
        out_list.append(sign * i)
        sum(out_list)   # do some computation
    return out_list

size = 10000        # Number of numbers to add
out_list = list()   # result is a simple list
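The creation of the two threads is not shown in this extract; a minimal sketch consistent with the list_append function above:

# two threads appending to the same list, with positive and negative sign
thread1 = threading.Thread(target=list_append, args=(size, 1, out_list))
thread2 = threading.Thread(target=list_append, args=(size, -1, out_list))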
startime = time.time()
# Will execute both in parallel
thread1.start()
thread2.start()
# Joins threads back to the parent process
thread1.join()
thread2.join()
print("Threading ellapsed time " ,
time.time() - startime)
print(out_list[:10])
Multiprocessing

import multiprocessing
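The two processes are created before the timing code in the original; a minimal sketch mirroring the threading example:

# two processes; out_list cannot be shared here, each process gets its own list
p1 = multiprocessing.Process(target=list_append, args=(size, 1, None))
p2 = multiprocessing.Process(target=list_append, args=(size, -1, None))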
startime = time.time()

p1.start()
p2.start()

p1.join()
p2.join()

# print(out_list[:10]) is not available: with processes, out_list is not shared
print("Multiprocessing ellapsed time ", time.time() - startime)

Out:

Multiprocessing ellapsed time 0.3927607536315918
Sharing objects between processes with Managers

Managers provide a way to create data which can be shared between different processes, including sharing over a network between processes running on different machines. A manager object controls a server process which manages shared objects.

import multiprocessing
import time
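The shared list and the two processes are created before the timing code in the original; a minimal sketch using a Manager-backed list:

# a manager-backed list can be shared between processes
manager = multiprocessing.Manager()
out_list = manager.list()

p1 = multiprocessing.Process(target=list_append, args=(size, 1, out_list))
p2 = multiprocessing.Process(target=list_append, args=(size, -1, out_list))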
startime = time.time()

p1.start()
p2.start()

p1.join()
p2.join()

print(out_list[:10])
print("Multiprocessing with shared object ellapsed time ", time.time() - startime)
2.9 Scripts and argument parsing

Example: a word-count script.

import os
import os.path
import argparse
import re
import pandas as pd

if __name__ == "__main__":
    # parse command line options
    output = "word_count.csv"
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--input',
                        help='list of input files.',
                        nargs='+', type=str)
    parser.add_argument('-o', '--output',
                        help='output csv file (default %s)' % output,
                        type=str, default=output)
    options = parser.parse_args()

    if options.input is None:
        parser.print_help()
        raise SystemExit("Error: input files are missing")
    else:
        filenames = [f for f in options.input if os.path.isfile(f)]

    # Match words
    regex = re.compile("[a-zA-Z]+")
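The counting loop that builds count is missing between the pattern and the output step; a minimal sketch under that assumption:

    # count word occurrences over all input files (assumed reconstruction)
    count = dict()
    for filename in filenames:
        with open(filename) as fd_in:
            for word in regex.findall(fd_in.read().lower()):
                count[word] = count.get(word, 0) + 1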
    fd = open(options.output, "w")

    # Pandas
    df = pd.DataFrame([[k, count[k]] for k in count], columns=["word", "count"])
    df.to_csv(options.output, index=False)
2.10 Networking
# TODO
2.10.1 FTP
2.10.2 HTTP
# TODO
2.10.3 Sockets
# TODO
2.10.4 xmlrpc
# TODO
2.11 Modules and packages

A module is a Python file. A package is a directory which MUST contain a special file called __init__.py.

To import, extend the variable PYTHONPATH:

export PYTHONPATH=path_to_parent_python_module:${PYTHONPATH}

Or

import sys
sys.path.append("path_to_parent_python_module")

The __init__.py file can be empty. But you can set which modules the package exports as the API, while keeping other modules internal, by overriding the __all__ variable, like so:

parentmodule/__init__.py file:

__all__ = ["submodule1", "submodule2", "function1", "function2"]
Users can then import:

import parentmodule.submodule1
from parentmodule import function1
Python Unit Testing
2.12 Object Oriented Programming (OOP)

Sources:

• http://python-textbok.readthedocs.org/en/latest/Object_Oriented_Programming.html

Principles:

• Encapsulate data (attributes) and code (methods) into objects.

Inheritance + Encapsulation:
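The Shape2D base class and the shapes list used below are defined earlier in the original; a minimal sketch consistent with the polymorphism and NotImplementedError examples that follow:

import math

class Shape2D:
    def area(self):
        raise NotImplementedError()

# shapes is assumed to be created after the subclasses below, e.g.:
# shapes = [Square(2), Disk(3)]   -> areas [4, 28.274333882308138]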
class Square(Shape2D):
    def __init__(self, width):
        self.width = width

    def area(self):
        return self.width ** 2


class Disk(Shape2D):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2


# Polymorphism
print([s.area() for s in shapes])

s = Shape2D()
try:
    s.area()
except NotImplementedError as e:
    print("NotImplementedError")

Out:

[4, 28.274333882308138]
NotImplementedError
2.13 Exercises

Exercise 1: functions

Create a function that acts as a simple calculator. If the operation is not specified, default to addition. If the operation is misspecified, return a prompt message. Ex: calc(4, 5, "multiply") returns 20; calc(3, 5) returns 8; calc(1, 2, "something") returns an error message.
Given a list of numbers, return a list where all adjacent duplicate elements have been reduced to a single element. Ex: [1, 2, 2, 3, 2] returns [1, 2, 3, 2]. You may create a new list or modify the passed-in list.

Remove all duplicate values (adjacent or not). Ex: [1, 2, 2, 3, 2] returns [1, 2, 3].
CHAPTER THREE
SCIENTIFIC PYTHON

3.1 Numpy: arrays and matrices

NumPy is an extension to the Python programming language, adding support for large, multi-dimensional (numerical) arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays.
Sources:
• Kevin Markham: https://github.com/justmarkham
import numpy as np
Create ndarrays from lists. note: every element must be the same type (will be converted
if possible)
Out:
(2, 5)
[[0. 1.]
[2. 3.]
[4. 5.]
[6. 7.]
[8. 9.] ]
Add an axis:

a = np.array([0, 1])
a_col = a[:, np.newaxis]
print(a_col)
# or
a_col = a[:, None]

Out:

[[0]
 [1]]

Transpose:

print(a_col.T)

Out:

[[0 1]]
Out:
[33. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[[0. 1. 2. 3. 4.]
[5. 6. 7. 8. 9.]]
Out:
[33. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[[33. 1. 2. 3. 4.]
[ 5. 6. 7. 8. 9.] ]
Numpy internals: by default Numpy uses the C convention, i.e. row-major order: the matrix is stored row by row. In C, the last index changes most rapidly as one moves through the array as stored in memory.

For 2D arrays, a sequential move in memory will:

• iterate over rows (axis 0)
  – iterate over columns (axis 1)

For 3D arrays, a sequential move in memory will:

• iterate over planes (axis 0)
  – iterate over rows (axis 1)
    – iterate over columns (axis 2)
x = np.arange(2 * 3 * 4)
print(x)

Out:

[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
Out:
[ [ [ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
Out:
[ [ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
Out:
[ [ 0 1 2 3]
[12 13 14 15]]
Out:
[ [ 0 4 8]
[12 16 20]]
Ravel:

print(x.ravel())

Out:

[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
Stack arrays:

# b is assumed to be np.array([2, 3]) here, consistent with the output below
ab = np.stack((a, b)).T
print(ab)

# or
np.hstack((a[:, None], b[:, None]))

Out:

[[0 2]
 [1 3]]
3.1.6 Selection

Single item

Slicing

Syntax: start:stop:step, with start (default 0), stop (default last) and step (default 1).

# arr is the 2 x 5 float array created above
arr[0, :]    # row 0: returns 1d array ([0., 1., 2., 3., 4.])
arr[:, 0]    # column 0: returns 1d array ([0., 5.])
arr[:, :2]   # columns strictly before index 2 (2 first columns)
arr[:, 2:]   # columns from index 2 (included)
arr2 = arr[:, 1:4]   # columns between index 1 (included) and 4 (excluded)
print(arr2)
Out:
[[1. 2. 3.]
[6. 7. 8.]]
Out:
[[33. 2. 3.]
[ 6. 7. 8.] ]
[ [ 0. 33. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
Row 0 in reverse order:

print(arr[0, ::-1])

# The rule of thumb here can be: in the context of lvalue indexing (i.e. the indices are
# placed in the left-hand side of an assignment), no view or copy of the array is created
# (because there is no need to). However, with regular values, the above rules for
# creating views do apply.
Out:
[ 4. 3. 2. 33. 0.]
Out:
[[33. 2. 3.]
[ 6. 7. 8.]]
[[44. 2. 3.]
[ 6. 7. 8.]]
[ [ 0. 33. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
Boolean arrays indexing:

arr2 = arr[arr > 5]   # return a copy
print(arr2)
arr2[0] = 44
print(arr2)
print(arr)

Out:

[33.  6.  7.  8.  9.]
[44.  6.  7.  8.  9.]
[[ 0. 33.  2.  3.  4.]
 [ 5.  6.  7.  8.  9.]]

However, in the context of lvalue indexing (left-hand side of an assignment), fancy indexing authorizes the modification of the original array:

arr[arr > 5] = 0
print(arr)

Out:

[[0. 0. 2. 3. 4.]
 [5. 0. 0. 0. 0.]]
nums = np.arange(5)
nums * 10             # multiply each element by 10
nums = np.sqrt(nums)  # square root of each element
np.ceil(nums)         # also floor, rint (round to nearest int)
np.isnan(nums)        # checks for NaN
nums + np.arange(5)   # add element-wise
np.maximum(nums, np.array([1, -2, 3, -4, 5]))   # compare element-wise

# random numbers
np.random.seed(12234)         # Set the seed
np.random.rand(2, 3)          # 2 x 3 matrix in [0, 1]
np.random.randn(10)           # random normals (mean 0, sd 1)
np.random.randint(0, 2, 10)   # 10 randomly picked 0 or 1
3.1.8 Broadcasting

Rules

Starting with the trailing axis and working backward, Numpy compares array dimensions:

• If the two dimensions are equal, continue.
• If one of the operands has dimension 1, stretch it to match the largest one.
• When one of the shapes runs out of dimensions (because it has fewer dimensions than the other shape), Numpy will use 1 in the comparison process until the other shape's dimensions run out as well.
Fig. 1: Source: http://www.scipy-lectures.org
a = np.array([[ 0,  0,  0],
              [10, 10, 10],
              [20, 20, 20],
              [30, 30, 30]])

b = np.array([0, 1, 2])

print(a + b)

Out:

[[ 0  1  2]
 [10 11 12]
 [20 21 22]
 [30 31 32]]
Examples

Shapes of operands A, B and result:

A      (2d array):  5 x 4
B      (1d array):      1
Result (2d array):  5 x 4

A      (2d array):  5 x 4
B      (1d array):      4
Result (2d array):  5 x 4

A      (3d array):  15 x 3 x 5
B      (3d array):  15 x 1 x 5
Result (3d array):  15 x 3 x 5

A      (3d array):  15 x 3 x 5
B      (2d array):       3 x 5
Result (3d array):  15 x 3 x 5

A      (3d array):  15 x 3 x 5
B      (2d array):       3 x 1
Result (3d array):  15 x 3 x 5
3.1.9 Exercises

Given the array:

X = np.random.randn(4, 2)   # random normals in a 4 x 2 array

• For each column find the row index of the minimum value.
• Write a function standardize(X) that returns an array whose columns are centered and scaled (by std-dev).
3.2 Pandas: data manipulation

It is often said that 80% of data analysis is spent on cleaning and preparing data. This section covers a small, but important, aspect of data manipulation and cleaning with Pandas.
Sources:
• Kevin Markham: https://github.com/justmarkham
• Pandas doc: http://pandas.pydata.org/pandas-docs/stable/index.html
Data structures

• Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is to call pd.Series([1, 3, 5, np.nan, 6, 8]).
• DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It stems from the R data.frame() object.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
3.2.1 Create DataFrame
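The construction of user1, user2 and user3 is not shown in this extract; one possible construction, consistent with the users table printed later (the exact split across the three frames is hypothetical):

user1 = pd.DataFrame({'name': ['alice', 'john'], 'age': [19, 26],
                      'gender': ['F', 'M'], 'job': ['student', 'student']})
user2 = pd.DataFrame({'name': ['eric', 'paul'], 'age': [22, 58],
                      'gender': ['M', 'F'], 'job': ['student', 'manager']})
user3 = pd.DataFrame({'name': ['peter', 'julie'], 'age': [33, 44],
                      'gender': ['M', 'F'], 'job': ['engineer', 'scientist']})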
print(user3)
Out:
Concatenate DataFrame
user1.append(user2)
users =pd.concat([user1, user2, user3])
print(users)
Out:
Out:
name height
0 alice 165
1 john 180
2 eric 175
3 julie 171
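The merge call itself is missing; presumably an inner join of users with the name/height DataFrame shown above (called user4 here, a hypothetical name):

merge_inter = pd.merge(users, user4, on="name")   # inner join on 'name'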
print(merge_inter)
Out:
    name  age gender        job  height
0  alice   19      F    student     165
1   john   26      M    student     180
2   eric   22      M    student     175
3  julie   44      F  scientist     171
Out:
name age gender job height
0 alice 19 F student 165.0
1 john 26 M student 180.0
Reshaping by pivoting
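The melt call producing the stacked table below is not shown; a minimal sketch ("unpivot" from wide to long format; the pandas default column names 'variable' and 'value' match the output):

staked = pd.melt(users, id_vars="name")
print(staked)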
Out:

     name variable      value
0   alice      age         19
1    john      age         26
2    eric      age         22
3    paul      age         58
4   peter      age         33
5   julie      age         44
6   alice   gender          F
7    john   gender          M
8    eric   gender          M
9    paul   gender          F
10  peter   gender          M
11  julie   gender          F
12  alice      job    student
13   john      job    student
14   eric      job    student
15   paul      job    manager
16  peter      job   engineer
17  julie      job  scientist
18  alice   height        165
19   john   height        180
20   eric   height        175
21   paul   height        NaN
22  peter   height        NaN
23  julie   height        171

"Pivots" a DataFrame from long (stacked) format to wide format:

print(staked.pivot(index='name', columns='variable', values='value'))
3.2.3 Summarizing

for i in range(users.shape[0]):
    row = df.iloc[i]
Out:

/home/edouard/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py:5096: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  self[name] = value

    name  age gender        job  height
0     55   19      F    student   165.0
1   john   26      M    student   180.0
2   eric   22      M    student   175.0
3   paul   58      F    manager     NaN
4  peter   33      M   engineer     NaN
5  julie   44      F  scientist   171.0
for i in range(df.shape[0]):
df.loc[i, "age"] *= 10
print(df) # df is modified
Out:
    name  age gender        job  height
0  alice  550      F    student   165.0
1   john  260      M    student   180.0
2   eric  220      M    student   175.0
3   paul  580      F    manager     NaN
4  peter  330      M   engineer     NaN
5  julie  440      F  scientist   171.0
3.2.7 Sorting

df = users.copy()
print(df)
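The summary table below is a describe() output; the call itself is not shown here, presumably:

print(df.describe(include='all'))

Out: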
         name        age gender      job      height
count       6   6.000000      6        6    4.000000
unique      6        NaN      2        4         NaN
top      eric        NaN      M  student         NaN
freq        1        NaN      3        3         NaN
mean      NaN  33.666667    NaN      NaN  172.750000
std       NaN  14.895189    NaN      NaN    6.344289
min       NaN  19.000000    NaN      NaN  165.000000
25%       NaN  23.000000    NaN      NaN  169.500000
50%       NaN  29.500000    NaN      NaN  173.000000
75%       NaN  41.250000    NaN      NaN  176.250000
max       NaN  58.000000    NaN      NaN  180.000000
         name gender      job
count       6      6        6
unique      6      2        4
top      eric      M  student
freq        1      3        3

Statistics per group (groupby):

print(df.groupby("job").mean())

print(df.groupby("job")["age"].mean())

print(df.groupby("job").describe(include='all'))

Out:
                 age      height
job
engineer   33.000000         NaN
manager    58.000000         NaN
scientist  44.000000  171.000000
student    22.333333  173.333333

job
engineer     33.000000
manager      58.000000
scientist    44.000000
student      22.333333
Name: age, dtype: float64
           name                          ... height
          count unique    top freq mean  ...    min    25%    50%    75%    max
job                                      ...
engineer      1      1  peter    1  NaN  ...    NaN    NaN    NaN    NaN    NaN
manager       1      1   paul    1  NaN  ...    NaN    NaN    NaN    NaN    NaN
scientist     1      1  julie    1  NaN  ...  171.0  171.0  171.0  171.0  171.0
student       3      3   eric    1  NaN  ...  165.0  170.0  175.0  177.5  180.0

[4 rows x 44 columns]
Groupby in a loop:

for grp, data in df.groupby("job"):
    print(grp, data)
Out:

1    False
2    False
3    False
4    False
5    False
6    False
7     True
dtype: bool
Missing data

# Missing values are often just excluded
df = users.copy()

Strategy 2: fill in missing values

df.height.mean()
df = users.copy()
df.loc[df.height.isnull(), "height"] = df["height"].mean()

print(df)

Out:
    name  age gender        job  height
0  alice   19      F    student  165.00
1   john   26      M    student  180.00
2   eric   22      M    student  175.00
3   paul   58      F    manager  172.75
4  peter   33      M   engineer  172.75
5  julie   44      F  scientist  171.00
df = users.copy()
print(df.columns)
df.columns = ['age', 'genre', 'travail', 'nom', 'taille']

df['travail'].str.contains("etu|inge")
Out:
Assume the random variable follows the normal distribution. Exclude data outside 3 standard deviations:

• Probability that a sample lies within 1 sd: 68.27%
• Probability that a sample lies within 3 sd: 99.73% (68.27 + 2 × 15.73)
size_outlr_mean = size.copy()
size_outlr_mean[((size - size.mean()).abs() > 3 * size.std())] = size.mean()
print(size_outlr_mean.mean())
Out:
248.48963819938044
Out:
173.80000467192673 178.7023568870694
csv
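The csv round-trip code is missing under this heading; a minimal sketch using a temporary directory:

import os
import tempfile

tmpdir = tempfile.gettempdir()
csv_filename = os.path.join(tmpdir, "users.csv")
users.to_csv(csv_filename, index=False)
other = pd.read_csv(csv_filename)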
Excel
xls_filename = os.path.join(tmpdir, "users.xlsx")
users.to_excel(xls_filename, sheet_name='users', index=False)

pd.read_excel(xls_filename, sheet_name='users')

# Multiple sheets
with pd.ExcelWriter(xls_filename) as writer:
    users.to_excel(writer, sheet_name='users', index=False)
    # a second sheet (e.g. salary) is written here in the original (not shown)

pd.read_excel(xls_filename, sheet_name='users')
pd.read_excel(xls_filename, sheet_name='salary')
SQL (SQLite)

import pandas as pd
import sqlite3

db_filename = os.path.join(tmpdir, "users.db")

Connect:

conn = sqlite3.connect(db_filename)
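Creating and filling the salary table is not shown here; a minimal sketch, assuming the salary DataFrame loaded elsewhere in this document:

# push a DataFrame into the database as a table named 'salary'
salary.to_sql("salary", conn, if_exists="replace")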
Push modifications:

cur = conn.cursor()
values = (100, 14000, 5, 'Bachelor', 'N')
cur.execute("insert into salary values (?, ?, ?, ?, ?)", values)
conn.commit()
Out:
index salary experience education management
0 0 13876 1 Bachelor Y
1 1 11608 1 Ph.D N
2 2 18701 1 Ph.D Y
3 3 11283 1 Master N
4 4 11767 1 Ph.D N
3.2.13 Exercises
Data Frame
Missing data
Out:
/home/edouard/git/pystatsml/scientific_python/scipy_pandas.py:440: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
3.3 Matplotlib: data visualization

3.3.1 Basic plots

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 50)
sinus = np.sin(x)

plt.plot(x, sinus)
plt.show()

# Rapid multiplot
cosinus = np.cos(x)
plt.plot(x, sinus, "-b", x, sinus, "ob", x, cosinus, "-r", x, cosinus, "or")
plt.xlabel('this is x!')
plt.ylabel('this is y!')
plt.title('My First Plot')
plt.show()

# Step by step
plt.plot(x, sinus, label='sinus', color='blue', linestyle='--', linewidth=2)
Load dataset:

import pandas as pd

try:
    salary = pd.read_csv("../datasets/salary_table.csv")
except:
    url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
    salary = pd.read_csv(url)

df = salary
Simple scatter with colors:

colors = colors_edu = {'Bachelor': 'r', 'Master': 'g', 'Ph.D': 'blue'}
plt.scatter(df['experience'], df['salary'],
            c=df['education'].apply(lambda x: colors[x]), s=100)

<matplotlib.collections.PathCollection at 0x7f39efac6358>
            s=150, label=manager + "/" + edu)

## Set labels
plt.xlabel('Experience')
plt.ylabel('Salary')
plt.legend(loc=4)   # lower right
plt.show()
# Prefer vectorial formats (SVG: Scalable Vector Graphics) that can be edited with
# Inkscape, Adobe Illustrator, Blender, etc.
plt.plot(x, sinus)
plt.savefig("sinus.svg")
plt.close()

# Or pdf
plt.plot(x, sinus)
plt.savefig("sinus.pdf")
plt.close()
3.3.4 Seaborn

Sources:

• http://stanford.edu/~mwaskom/software/seaborn
• https://elitedatascience.com/python-seaborn-tutorial

If needed, install using: pip install -U --user seaborn
Boxplot

Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions about the underlying statistical distribution.
import seaborn as sns
<matplotlib.axes._subplots.AxesSubplot at 0x7f39ed42ff28>
<matplotlib.axes._subplots.AxesSubplot at 0x7f39eb61d780>
# axes comes from a subplots call such as fig, axes = plt.subplots(3, 1) (not shown here)
i = 0
for edu, d in salary.groupby(['education']):
    sns.distplot(d.salary[d.management == "Y"], color="b", bins=10, label="Manager", ax=axes[i])
    sns.distplot(d.salary[d.management == "N"], color="r", bins=10, label="Employee", ax=axes[i])
    axes[i].set_title(edu)
    axes[i].set_ylabel('Density')
    i += 1

ax = plt.legend()
ax =sns.violinplot(x="salary", data=salary)
Tune bandwidth:

ax = sns.violinplot(x="salary", data=salary, bw=.15)
Tips dataset: one waiter recorded information about each tip he received over a period of a few months working in one restaurant. He collected several variables:

# tips = sns.load_dataset("tips") (loading not shown here)
ax = sns.violinplot(x=tips["total_bill"])

   total_bill   tip     sex smoker  day    time  size
0       16.99  1.01  Female     No  Sun  Dinner     2
1       10.34  1.66    Male     No  Sun  Dinner     3
2       21.01  3.50    Male     No  Sun  Dinner     3
3       23.68  3.31    Male     No  Sun  Dinner     2
4       24.59  3.61  Female     No  Sun  Dinner     4

Group by day:

ax = sns.violinplot(x="day", y="total_bill", data=tips, palette="muted")
Pairwise scatter plots:

g = sns.PairGrid(salary, hue="management")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter)
ax = g.add_legend()
# fmri = sns.load_dataset("fmri") (loading not shown here)
ax = sns.pointplot(x="timepoint", y="signal", hue="region", style="event", data=fmri)

# version 0.9
# sns.lineplot(x="timepoint", y="signal",
#              hue="region", style="event",
#              data=fmri)
CHAPTER FOUR
STATISTICS

4.1 Univariate statistics
Mean

E(X + c) = E(X) + c    (4.1)
E(X + Y) = E(X) + E(Y)    (4.2)
E(aX) = a E(X)    (4.3)

The estimator \bar{x} on a sample of size n, x = x_1, ..., x_n, is given by

\bar{x} = \frac{1}{n} \sum_i x_i
Variance

Var(X) = E((X - E(X))^2) = E(X^2) - (E(X))^2
The estimator is

\sigma_x^2 = \frac{1}{n - 1} \sum_i (x_i - \bar{x})^2

Note here the subtracted 1 degree of freedom (df) in the divisor. In standard statistical practice, df = 1 provides an unbiased estimator of the variance of a hypothetical infinite population. With df = 0 it instead provides a maximum likelihood estimate of the variance for normally distributed variables.
Standard deviation

Std(X) = \sqrt{Var(X)}

The estimator is simply \sigma_x = \sqrt{\sigma_x^2}.
Covariance

Cov(X, Y) = E((X - E(X))(Y - E(Y)))

Correlation

Cor(X, Y) = \frac{Cov(X, Y)}{Std(X)\,Std(Y)}

The estimator is

\rho_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}
The standard error (SE) is the standard deviation (of the sampling distribution) of a statistic:

SE(X) = \frac{Std(X)}{\sqrt{n}}

It is most commonly considered for the mean, with the estimator SE(\bar{x}) = \sigma_x / \sqrt{n}.
Exercises

• Generate 2 random samples: x ~ N(1.78, 0.1) and y ~ N(1.66, 0.1), both of size 10.
• Compute \bar{x}, \sigma_x, \sigma_{xy} (xbar, xvar, xycov) using only the np.sum() operation. Explore the np. module to find out which numpy functions perform the same computations and compare them (using assert) with your previous results.
Normal distribution

The normal distribution, noted N(\mu, \sigma), has parameters \mu (mean, location) and \sigma > 0 (std-dev). Estimators: \bar{x} and \sigma_x.

The normal distribution, noted N, is useful because of the central limit theorem (CLT), which states that, given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
%matplotlib inline

mu = 0                      # mean
variance = 2                # variance
sigma = np.sqrt(variance)   # standard deviation
x = np.linspace(mu - 3 * variance, mu + 3 * variance, 100)
plt.plot(x, norm.pdf(x, mu, sigma))
[<matplotlib.lines.Line2D at 0x7f5cd6d3afd0>]
The chi-square or \chi^2_n distribution with n degrees of freedom (df) is the distribution of the sum of the squares of n independent standard normal random variables N(0, 1). Let X ~ N(\mu, \sigma^2), then Z = (X - \mu)/\sigma ~ N(0, 1), and:

• The squared standard normal Z^2 ~ \chi^2_1 (one df).
• The distribution of the sum of squares of n normal random variables: \sum_i^n Z_i^2 ~ \chi^2_n

The sum of two \chi^2 RVs with p and q df is a \chi^2 RV with p + q df. This is useful when summing/subtracting sums of squares.

The \chi^2-distribution is used to model errors measured as sums of squares or the distribution of the sample variance.
The F-distribution, F_{n,p}, with n and p degrees of freedom, is the ratio of two independent \chi^2 variables. Let X ~ \chi^2_n and Y ~ \chi^2_p, then:

F_{n,p} = \frac{X/n}{Y/p}

The F-distribution plays a central role in hypothesis testing, answering the question: are two variances equal?, i.e. is the ratio of two errors significantly large?
import numpy as np
from scipy.stats import f
import matplotlib.pyplot as plt
%matplotlib inline
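The plotting code that used these imports is missing here; a minimal sketch of the F density for two hypothetical degrees-of-freedom pairs:

fvalues = np.linspace(.1, 5, 100)

# plot the F(30, 30) and F(5, 30) probability density functions
plt.plot(fvalues, f.pdf(fvalues, 30, 30), 'b-', label="F(30, 30)")
plt.plot(fvalues, f.pdf(fvalues, 5, 30), 'r-', label="F(5, 30)")
plt.legend()
plt.show()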
The Student's t-distribution

Let M ~ N(0, 1) and V ~ \chi^2_n. The t-distribution, T_n, with n degrees of freedom is the ratio:

T_n = \frac{M}{\sqrt{V/n}}

The distribution of the difference between an estimated parameter and its true (or assumed) value divided by the standard deviation of the estimated parameter (standard error) follows a t-distribution. Is this parameter different from a given value?
4.1.3 Hypothesis Testing

Examples:

• Test a proportion: biased coin? 200 heads have been found over 300 flips, is the coin biased?
• Test the association between two variables.
  – Example, height and sex: in a sample of 25 individuals (15 females, 10 males), is female height different from male height?
  – Example, age and arterial hypertension: in a sample of 25 individuals, is age correlated with arterial hypertension?

Steps:

1. Model the data.
2. Fit: estimate the model parameters (frequency, mean, correlation, regression coefficient).
3. Compute a test statistic from the model parameters.
4. Formulate the null hypothesis: what would be the (distribution of the) test statistic if the observations were the result of pure chance?
5. Compute the probability (p-value) of obtaining a value of the test statistic at least as large by chance (under the null hypothesis).
Biased coin? 2 heads have been found over 3 flips, is the coin biased?

1. Model the data: the number of heads follows a Binomial distribution.
2. Compute model parameters: N = 3, P = the frequency of heads over the number of flips: 2/3.
3. Compute a test statistic, same as the frequency.
4. Under the null hypothesis, the distribution of the number of heads is:

flip 1  flip 2  flip 3  #heads
                             0
H                            1
        H                    1
                H            1
H       H                    2
H               H            2
        H       H            2
H       H       H            3

Text(0.5, 0, 'Distribution of the number of head over 3 flip under the null hypothesis')
5. Compute the probability (p-value) of observing a value larger than or equal to 2 under the null hypothesis. This probability is the p-value.

Biased coin? 60 heads have been found over 100 flips, is the coin biased?

1. Model the data: the number of heads follows a Binomial distribution.
2. Compute model parameters: N = 100, P = 60/100.
3. Compute a test statistic, same as the frequency.
4. Under the null hypothesis, the distribution of the number of heads (k) follows the binomial distribution with parameters N = 100, P = 0.5:

Pr(X = k | H_0) = Pr(X = k | n = 100, p = 0.5) = \binom{100}{k} \, 0.5^k (1 - 0.5)^{100 - k}.

5. Compute the probability (p-value) of observing a value larger than or equal to 60 under the null hypothesis:

P(X \geq 60 | H_0) = \sum_{k=60}^{100} \binom{100}{k} \, 0.5^k (1 - 0.5)^{100 - k} = 1 - F(60),

where F is the cumulative distribution function.

import scipy.stats
import matplotlib.pyplot as plt

pval = 1 - scipy.stats.binom.cdf(60, 100, 0.5)
print(pval)

Out:

0.01760010010885238
Out:

[60 52 51 ... 45 51 44]
P-value using Monte-Carlo sampling of the Binomial distribution under H0 = 0.025897410258974102
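The sampling code that produced the output above is not shown; a minimal sketch of a Monte-Carlo estimate of the p-value:

# draw many experiments of 100 fair coin flips and count how often we see >= 60 heads
np.random.seed(42)   # hypothetical seed
heads = np.random.binomial(100, 0.5, size=10000)
print(heads)
pval_mc = np.mean(heads >= 60)
print("P-value using Monte-Carlo sampling of the Binomial distribution under H0 =", pval_mc)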
The one-sample t-test is used to determine whether a sample comes from a population with a specific mean. For example, you want to test if the average height of a population is 1.75 m.

1. Model the data

Assume that height is normally distributed: X ~ N(\mu, \sigma). By the central limit theorem, if the sampling of the parent population is independent then the sample means will be approximately normal.

4. Compute the probability of the test statistic under the null hypothesis. This requires the distribution of the t statistic under H_0.

Example

Given the following samples, we will test whether the true mean is 1.75.

Warning: when computing the std or the variance, set ddof=1. The default value, ddof=0, leads to the biased estimator of the variance.
import numpy as np

x = [1.83, 1.83, 1.73, 1.82, 1.83, 1.73, 1.99, 1.85, 1.68, 1.87]

print(xbar)

Out:

1.816
2.3968766311585883
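The computation of the mean and of the t statistic is not shown above; a minimal sketch consistent with the two printed values:

xbar = np.mean(x)                               # sample mean: 1.816
s = np.std(x, ddof=1)                           # unbiased std-dev
tobs = (xbar - 1.75) / (s / np.sqrt(len(x)))    # t statistic: about 2.3969
print(xbar)
print(tobs)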
The p-value is the probability of observing a value t more extreme than the observed one t_obs under the null hypothesis H_0: P(t > t_obs | H_0).

import scipy.stats as stats
import matplotlib.pyplot as plt

_ = plt.legend()
Fig. 1: Statistical tests
import numpy as np
import scipy.stats as stats

n = 50
x = np.random.normal(size=n)
y = 2 * x + np.random.normal(size=n)

Out:

0.904453622242007 2.189729365511301e-19
The two-sample t-test (Snedecor and Cochran, 1989) is used to determine if two population means are equal. There are several variations on this test. If data are paired (e.g. 2 measures, before and after treatment for each individual) use the one-sample t-test of the difference. The variances of the two samples may be assumed to be equal (a.k.a. homoscedasticity) or unequal (a.k.a. heteroscedasticity).

Assume that the two random variables are normally distributed: y_1 ~ N(\mu_1, \sigma_1), y_2 ~ N(\mu_2, \sigma_2).
3. t-test

t = \frac{\text{difference of means}}{\text{standard dev of error}}    (4.11)
  = \frac{\text{difference of means}}{\text{its standard error}}       (4.12)
  = \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{\sum \varepsilon^2}} \sqrt{n - 2}    (4.13)
  = \frac{\bar{y}_1 - \bar{y}_2}{s_{\bar{y}_1 - \bar{y}_2}}             (4.14)

Since y_1 and y_2 are independent:

s^2_{\bar{y}_1 - \bar{y}_2} = s^2_{\bar{y}_1} + s^2_{\bar{y}_2} = \frac{s^2_{y_1}}{n_1} + \frac{s^2_{y_2}}{n_2}    (4.15-4.16)

thus

s_{\bar{y}_1 - \bar{y}_2} = \sqrt{\frac{s^2_{y_1}}{n_1} + \frac{s^2_{y_2}}{n_2}}    (4.17)
Equal or unequal sample sizes, unequal variances (Welch's t-test)

In this case the standard error of the difference is the one given above: s_{\bar{y}_1 - \bar{y}_2} = \sqrt{s^2_{y_1}/n_1 + s^2_{y_2}/n_2}.

Equal or unequal sample sizes, equal variances

If the two variances are assumed equal, they are pooled:

s^2_{y_1 y_2} = \frac{(n_1 - 1) s^2_{y_1} + (n_2 - 1) s^2_{y_2}}{n_1 + n_2 - 2}
             = \frac{\sum_i (y_{1i} - \bar{y}_1)^2 + \sum_j (y_{2j} - \bar{y}_2)^2}{(n_1 - 1) + (n_2 - 1)}    (4.18-4.19)

then

s_{\bar{y}_1 - \bar{y}_2} = \sqrt{\frac{s^2_{y_1 y_2}}{n_1} + \frac{s^2_{y_1 y_2}}{n_2}} = s_{y_1 y_2} \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}
Therefore, the t statistic that is used to test whether the means are different is:

t = \frac{\bar{y}_1 - \bar{y}_2}{s_{y_1 y_2} \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}    (4.20)
  \approx \frac{\bar{y}_1 - \bar{y}_2}{s} \cdot \sqrt{n}    (4.21)
  \approx \text{effect size} \cdot \sqrt{n}
  \approx \frac{\text{difference of means}}{\text{standard deviation of the noise}} \cdot \sqrt{n}    (4.22)

Example
Given the following two samples, test whether their means are equal using the standard t-test, assuming equal variance.

import scipy.stats as stats

height = np.array([1.83, 1.83, 1.73, 1.82, 1.83, 1.73, 1.99, 1.85, 1.68, 1.87,
                   1.66, 1.71, 1.73, 1.64, 1.70, 1.60, 1.79, 1.73, 1.62, 1.77])
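The grouping and the test call are not shown above; a minimal sketch, assuming the first 10 values are group 1 and the last 10 are group 2:

grp1 = height[:10]   # assumed: first 10 values
grp2 = height[10:]   # assumed: last 10 values

# two-sample t-test assuming equal variances
tval, pval = stats.ttest_ind(grp1, grp2, equal_var=True)
print(tval, pval)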
Analysis of variance (ANOVA) provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is less conservative.

Here we will consider one-way ANOVA with one independent variable.

Wikipedia:

• Test if any group is on average superior, or inferior, to the others versus the null hypothesis that all four strategies yield the same mean response.
• Detect any of several possible differences.
• The advantage of the ANOVA F-test is that we do not need to pre-specify which strategies are to be compared, and we do not need to adjust for making multiple comparisons.
• The disadvantage of the ANOVA F-test is that if we reject the null hypothesis, we do not know which strategies can be said to be significantly different from the others.

A company has applied three marketing strategies to three samples of customers in order to increase their business volume. The marketing department is asking whether the strategies led to different increases of business volume. Let y_1, y_2 and y_3 be the three samples of business volume increase.

Here we assume that the three populations were sampled from three random variables that are normally distributed. I.e., Y_1 ~ N(\mu_1, \sigma_1), Y_2 ~ N(\mu_2, \sigma_2) and Y_3 ~ N(\mu_3, \sigma_3).
3. F-test

F = \frac{\text{Explained variance}}{\text{Unexplained variance}} = \frac{\text{Between-group variability}}{\text{Within-group variability}} = \frac{s^2_B}{s^2_W}.    (4.23-4.24)

The "explained variance", or "between-group variability" is

s^2_B = \sum_i n_i (\bar{y}_{i\cdot} - \bar{y})^2 / (K - 1),

where \bar{y}_{i\cdot} denotes the sample mean in the ith group, n_i is the number of observations in the ith group, \bar{y} denotes the overall mean of the data, and K denotes the number of groups.

The "unexplained variance", or "within-group variability" is

s^2_W = \sum_{ij} (y_{ij} - \bar{y}_{i\cdot})^2 / (N - K),

where y_{ij} is the jth observation in the ith out of K groups and N is the overall sample size. This F-statistic follows the F-distribution with K - 1 and N - K degrees of freedom under the null hypothesis. The statistic will be large if the between-group variability is large relative to the within-group variability, which is unlikely to happen if the population means of the groups all have the same value.

Note that when there are only two groups for the one-way ANOVA F-test, F = t^2 where t is the Student's t statistic.
Computes the chi-square, \chi^2, statistic and p-value for the hypothesis test of independence of frequencies in the observed contingency table (cross-table). The observed frequencies are tested against an expected contingency table obtained by computing expected frequencies based on the marginal sums under the assumption of independence.

Example: 20 participants: 10 exposed to some chemical product and 10 non-exposed (exposed = 1 or 0). Among the 20 participants, 10 had cancer and 10 did not (cancer = 1 or 0). \chi^2 tests the association between those two variables.
import numpy as np
import pandas as pd
import scipy.stats as stats

# Dataset:
# 20 samples: 10 first exposed
exposed = np.array([1] * 10 + [0] * 10)
# 8 first with cancer, 10 without, the last two with.
cancer = np.array([1] * 8 + [0] * 10 + [1] * 2)

# exposed_freq and cancer_freq are the marginal frequencies (computation not shown here)
print('Expected frequencies:')
print(np.outer(exposed_freq, cancer_freq))

Out:

[[5. 5.]
 [5. 5.]]
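The contingency table and the test itself are not shown above; a minimal sketch:

# observed contingency table and chi-square test of independence
crosstab = pd.crosstab(exposed, cancer, rownames=['exposed'], colnames=['cancer'])
chi2, pval, dof, expected = stats.chi2_contingency(crosstab)
print("Chi2 = %f, pval = %f" % (chi2, pval))
print("Expected frequencies:")
print(expected)   # [[5. 5.] [5. 5.]]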
plt.plot(x, y, "bo")
# Non-Parametric Spearman
cor, pval =stats.spearmanr(x, y)
print("Non-Parametric Spearman
cor test, cor: %.4f, pval: %.4f"
% (cor, pval))
86
# "Parametric Pearson cor test Chapter 4. Statistics
cor, pval =stats.pearsonr(x, y)
print("Parametric Pearson cor
test: cor: %.4f, pval: %.4f" %
Statistics and Machine Learning in Python, Release 0.3
beta
Source: https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test

The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used when comparing two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e. it is a paired difference test). It is equivalent to a one-sample test of the difference of paired samples.

It can be used as an alternative to the paired Student's t-test, t-test for matched pairs, or the t-test for dependent samples when the population cannot be assumed to be normally distributed.

When to use it? Observe the data distribution: presence of outliers; the distribution of the residuals is not Gaussian.

It has a lower sensitivity compared to the t-test. It may be problematic to use when the sample size is small.

Null hypothesis H_0: the difference between the pairs follows a symmetric distribution around zero.

import scipy.stats as stats

n = 20
# Business Volume at time 0
bv0 = np.random.normal(loc=3, scale=.1, size=n)
# Business Volume at time 1
bv1 = bv0 + 0.1 + np.random.normal(loc=0, scale=.1, size=n)

# create an outlier
bv1[0] -= 10

# Paired t-test
print(stats.ttest_rel(bv0, bv1))

# Wilcoxon
print(stats.wilcoxon(bv0, bv1))

Ttest_relResult(statistic=0.8167367438079456, pvalue=0.4242016933514212)
WilcoxonResult(statistic=40.0, pvalue=0.015240061183200121)
# create an outlier
bv1[0] -= 10
# Two-samples t-test
print(stats.ttest_ind(bv0, bv1))
# Wilcoxon
print(stats.mannwhitneyu(bv0, bv1))
Ttest_indResult(statistic=0.6227075213159515, pvalue=0.5371960369300763)
MannwhitneyuResult(statistic=43.0, pvalue=1.1512354940556314e-05)
• The \beta's are the model parameters, i.e., the regression coefficients.
• An independent variable stands alone and isn't changed by the other variables you are trying to measure. For example, someone's age might be an independent variable. Other factors (such as what they eat, how much they go to school, how much television they watch) aren't going to change a person's age. In fact, when you are looking for some kind of relationship between variables you are trying to see if the independent variable causes some kind of change in the other variables, or dependent variables. In Machine Learning, these variables are also called the predictors.
• A dependent variable. It is something that depends on other factors. For example, a test score could be a dependent variable because it could change depending on several factors such as how much you studied, how much sleep you got the night before you took the test, or even how hungry you were when you took it. Usually when you are looking for a relationship between two things you are trying to find out what makes the dependent variable change the way it does. In Machine Learning this variable is called a target variable.
Using the dataset "salary", explore the association between the dependent variable (e.g. Salary) and the independent variable (e.g. Experience, which is quantitative).

import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
salary = pd.read_csv(url)

Model the data on some hypothesis, e.g.: salary is a linear function of the experience.
More generally,

y_i = \beta x_i + \beta_0 + \epsilon_i

Recall from calculus that an extreme point can be found by computing where the derivative is zero, i.e. to find the intercept, we perform the steps:

\frac{\partial SSE}{\partial \beta_0} = \sum_i (y_i - \beta x_i - \beta_0) = 0
\sum_i y_i = \beta \sum_i x_i + n \beta_0
n \bar{y} = n \beta \bar{x} + n \beta_0
\beta_0 = \bar{y} - \beta \bar{x}

To find the slope, set the derivative with respect to \beta to zero:

\sum_i x_i (y_i - \beta x_i - \bar{y} + \beta \bar{x}) = 0
\sum_i x_i y_i - \bar{y} \sum_i x_i = \beta \sum_i (x_i - \bar{x}) x_i
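The numerical fit whose output appears below (y = 491.49 x + 13584.04, ...) is not shown; a minimal sketch using scipy.stats.linregress:

import scipy.stats as stats

beta, beta0, r_value, p_value, std_err = stats.linregress(x=salary.experience, y=salary.salary)
print("y = %f x + %f, r: %f, r-squared: %f,\np-value: %f, std_err: %f"
      % (beta, beta0, r_value, r_value**2, p_value, std_err))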
print("Using seaborn")
import seaborn as sns
sns.regplot(x="experience",
y="salary", data=salary);
y =491.486913 x +13584.043803, r : 0.538886, r-squared: 0.290398,
p-value: 0.000112, std_err: 115.823381
Regression line with the scatterplot
Using seaborn
3. F-Test

3.1 Goodness of fit

The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. We will consider the explained variance, also known as the coefficient of determination, denoted R^2, pronounced R-squared.

The total sum of squares, SS_tot, is the sum of the sum of squares explained by the regression, SS_reg, plus the sum of squares of residuals unexplained by the regression, SS_res (also called the SSE), i.e. such that

SS_tot = SS_reg + SS_res
The mean of y is

\bar{y} = \frac{1}{n} \sum_i y_i.

The total sum of squares is the total squared sum of deviations from the mean of y, i.e.

SS_{tot} = \sum_i (y_i - \bar{y})^2

The regression sum of squares is

SS_{reg} = \sum_i (\hat{y}_i - \bar{y})^2,

where \hat{y}_i = \beta x_i + \beta_0 is the estimated value of salary \hat{y}_i given a value of experience x_i.

The sum of squares of the residuals, also called the residual sum of squares (RSS), is:

SS_{res} = \sum_i (y_i - \hat{y}_i)^2.

R^2 is the explained sum of squares of errors: the variance explained by the regression divided by the total variance, i.e.

R^2 = \frac{\text{explained SS}}{\text{total SS}} = \frac{SS_{reg}}{SS_{tot}} = 1 - \frac{SS_{res}}{SS_{tot}}.
3.2 Test

Using the F-distribution, compute the probability of observing a value greater than F under H_0, i.e. P(x > F | H_0), i.e. the survival function (1 - Cumulative Distribution Function) at x of the given F-distribution.
Multiple regression

Theory

Given: a set of training data {x_1, ..., x_N} with corresponding targets {y_1, ..., y_N}.
In linear regression, we assume that the model that generates the data involves only a linear combination of the input variables, i.e.

y(x_i, \beta) = \beta_0 + \beta_1 x_i^1 + ... + \beta_P x_i^P,

or, simplified,

y(x_i, \beta) = \beta_0 + \sum_{j=1}^{P-1} \beta_j x_i^j.

Extending each sample with an intercept, x_i := [1, x_i] \in R^{P+1}, allows us to use a more general notation based on linear algebra and write it as a simple dot product:

y(x_i, \beta) = x_i^T \beta,

where \beta \in R^{P+1} is a vector of weights that define the P + 1 parameters of the model. From now on we have P regressors + the intercept.

Minimize the Mean Squared Error (MSE) loss:

MSE(\beta) = \frac{1}{N} \sum_{i=1}^{N} (y_i - y(x_i, \beta))^2 = \frac{1}{N} \sum_{i=1}^{N} (y_i - x_i^T \beta)^2

Let X = [x_0^T, ..., x_N^T] be an N x (P + 1) matrix of N samples of P input features with one column of ones, and let y = [y_1, ..., y_N] be a vector of the N targets. Then, using linear algebra, the mean squared error (MSE) loss can be rewritten:

MSE(\beta) = \frac{1}{N} ||y - X\beta||_2^2.

The \beta that minimises the MSE can be found by:

\nabla_\beta \left( \frac{1}{N} ||y - X\beta||_2^2 \right) = 0    (4.25)
\frac{1}{N} \nabla_\beta (y - X\beta)^T (y - X\beta) = 0    (4.26)
\frac{1}{N} \nabla_\beta (y^T y - 2\beta^T X^T y + \beta^T X^T X \beta) = 0    (4.27)
-2 X^T y + 2 X^T X \beta = 0    (4.28)
X^T X \beta = X^T y    (4.29)
\beta = (X^T X)^{-1} X^T y,    (4.30)

where (X^T X)^{-1} X^T is a pseudo inverse of X.
Fit with numpy

import numpy as np
from scipy import linalg

np.random.seed(seed=42)   # make the example reproducible
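The data simulation and the closed-form fit are not shown here; a minimal sketch under hypothetical dimensions and coefficients:

# simulate a small dataset (hypothetical sizes)
N, P = 50, 4
X = np.random.normal(size=(N, P))
X[:, 0] = 1                              # the first column is the intercept
betastar = np.array([10, 1., .5, 0.1])   # true coefficients (hypothetical)
e = np.random.normal(size=N)
y = np.dot(X, betastar) + e

# closed-form solution with the pseudo-inverse: beta = (X'X)^-1 X'y
Xpinv = linalg.pinv(X)
betahat = np.dot(Xpinv, y)
print("Estimated beta:\n", betahat)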
Sources: http://statsmodels.sourceforge.net/devel/examples/

Multiple regression

Interface with statsmodels:

import statsmodels.api as sm

## Fit and summary:
model = sm.OLS(y, X).fit()
print(model.summary())

# residuals + prediction == true values
# (ypred = model.predict(X), not shown here)
assert np.all(ypred + model.resid == y)
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.363
Model: OLS Adj. R-squared: 0.322
Method: Least Squares F-statistic: 8.748
Date: Wed, 06 Nov 2019 Prob (F-statistic): 0.000106
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Interface with Pandas

Use R language syntax for data.frame. For an additive model: y_i = \beta_0 + x_i^1 \beta_1 + x_i^2 \beta_2 + \epsilon_i, which is written y ~ x1 + x2.

import statsmodels.formula.api as smfrmla
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Analysis of covariance (ANCOVA) is a linear model that blends ANOVA and linear
regression. ANCOVA evaluates whether population means of a dependent variable (DV) are
equal across levels of a categorical independent variable (IV) often called a treatment,
while statistically controlling for the effects of other quantitative or continuous variables
that are not of primary interest, known as covariates (CV).
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
try:
df = pd.read_csv("../datasets/salary_table.csv")
except:
url ='https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
df = pd.read_csv(url)
One-way AN(C)OVA
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
                  sum_sq    df           F        PR(>F)
education   9.152624e+07   2.0   43.351589  7.672450e-11
management  5.075724e+08   1.0  480.825394  2.901444e-24
experience  3.380979e+08   1.0  320.281524  5.546313e-21
Residual    4.328072e+07  41.0         NaN           NaN
Comparing two nested models

oneway is nested within twoway. Comparing two nested models tells us if the additional predictors (i.e. education) of the full model significantly decrease the residuals. Such a comparison can be done using an 𝐹-test on residuals:
print(twoway.compare_f_test(oneway)) # return F, pval, df
Factor coding
See http://statsmodels.sourceforge.net/devel/contrasts.html
By default Pandas uses “dummy coding”. Explore:
print(twoway.model.data.param_names)
print(twoway.model.data.exog[:10, : ] )
import numpy as np
np.random.seed(seed=42) # make example reproducible
# Dataset
n_samples, n_features =100, 1000
n_info =int(n_features/10) # number of features with information
n1, n2 =int(n_samples/2), n_samples - int(n_samples/2)
snr = .5
Y =np.random.randn(n_samples, n_features)
grp =np.array(["g1"] * n1 +["g2"] * n2)
#
import scipy.stats as stats
import matplotlib.pyplot as plt

tvals, pvals = np.full(n_features, np.NAN), np.full(n_features, np.NAN)
for j in range(n_features):
    tvals[j], pvals[j] = stats.ttest_ind(Y[grp == "g1", j], Y[grp == "g2", j],
                                         equal_var=True)
axis[2].hist([pvals[n_info:], pvals[:n_info]],
stacked=True, bins=100, label=["Negatives", "Positives"])
axis[2].set_xlabel("p-value histogram")
axis[2].set_ylabel("density")
axis[2].legend()
plt.tight_layout()
Note that under the null hypothesis the distribution of the p-values is uniform.
Statistical measures:
• True Positive (TP), equivalent to a hit. The test correctly concludes the presence of an effect.
• True Negative (TN). The test correctly concludes the absence of an effect.
• False Positive (FP), equivalent to a false alarm, Type I error. The test improperly concludes the presence of an effect. Thresholding at p-value < 0.05 leads to 47 FP.
• False Negative (FN), equivalent to a miss, Type II error. The test improperly concludes the absence of an effect.
FDR-controlling procedures are designed to control the expected proportion of rejected null hypotheses that were incorrect rejections (“false discoveries”). FDR-controlling procedures provide less stringent control of Type I errors compared to familywise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased rates of Type I errors.
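The code that computes FP and TP after correction is not shown in this excerpt; a minimal sketch, assuming the pvals vector computed above and the convention that only the first n_info features carry a true effect, could be:

from statsmodels.stats.multitest import multipletests

# Benjamini-Hochberg FDR correction at the 5% level
rejected, pvals_fdr, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')

TP = rejected[:n_info].sum()   # detected effects (assumption: first n_info features are true positives)
FP = rejected[n_info:].sum()   # false discoveries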
print("FDR
correction, FP: %i, TP: %i" % (FP, TP))
4.1.9 Exercises
Load the dataset: birthwt, Risk Factors Associated with Low Infant Birth Weight, at https://raw.github.com/neurospin/pystatsml/master/datasets/birthwt.csv
1. Test the association of mother’s age and birth weight (bwt) using the correlation test and linear regression.
2. Test the association of mother’s weight (lwt) and birth weight using the correlation test and linear regression.
3. Produce two scatter plots of: (i) age by birth weight; (ii) mother’s weight by birth weight. Conclusion?
Compute:
• ȳ: y_mu
• 𝑆𝑆tot: ss_tot
• 𝑆𝑆reg: ss_reg
• 𝑆𝑆res: ss_res
• Check partition of variance formula based on sum of squares by using assert np.
allclose(val1, val2, atol=1e-05)
• Compute 𝑅2 and compare it with the r_value above
• Compute the 𝐹 score
• Compute the p-value:
• Plot the 𝐹(1, 𝜈) distribution for 100 𝑓 values within [10, 25]. Draw 𝑃(𝐹(1, 𝜈) > 𝐹), i.e. color the surface defined by the 𝑓 values larger than 𝐹 below the 𝐹(1, 𝜈) density.
• 𝑃(𝐹(1, 𝜈) > 𝐹) is the p-value, compute it.
Multiple regression
# Dataset
N, P =50, 4
X =np.random.normal(size= N * P).reshape((N, P))
## Our model needs an intercept so we add a column of 1s:
X[: , 0] =1
print(X[:5, : ] )
Given the following two samples, test whether their means are equal.
height =np.array([ 1.83, 1.83, 1.73, 1.82, 1.83,
1.73,1.99, 1.85, 1.68, 1.87,
1.66, 1.71, 1.73, 1.64, 1.70,
1.60, 1.79, 1.73, 1.62, 1.77])
grp =np.array(["M"] * 10 +[ " F " ] * 10)
Consider the model

y = g + 𝜀,

where the noise 𝜀 ∼ 𝑁(1, 1) and 𝑔 ∈ {0, 1} is a group indicator variable with 50 ones and 50 zeros.
• Write a function tstat(y, g) that computes the two-sample t-test of y split into two groups defined by g.
• Sample the t-statistic distribution under the null hypothesis using random
permutations.
• Assess the p-value.
Multiple comparisons

This exercise has two goals: apply your knowledge of statistics using vectorized numpy operations. Given the dataset provided for multiple comparisons, compute the two-sample t-test (assuming equal variance) for each (column) feature of the Y array given the two groups defined by the grp variable. You should return two vectors of size n_features: one for the t-values and one for the p-values.
ANOVA

y = np.zeros(n)
for k in grp:
    y[label == k] = np.random.normal(mu_k[k], sd_k[k], n_k[k])
The study provides the brain volumes of grey matter (gm), white matter (wm) and cerebrospinal fluid (csf) of 808 anatomical MRI scans.
1. Manipulate data
import os
import tempfile
import urllib.request

WD = os.path.join(tempfile.gettempdir(), "brainvol")
os.makedirs(WD, exist_ok=True)
os.makedirs(os.path.join(WD, "data"), exist_ok=True)  # downloaded files go into WD/data
#os.chdir(WD)
base_url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/brain_volumes/%s'
data = dict()
for file in ["demo.csv", "gm.csv", "wm.csv", "csf.csv"]:
    urllib.request.urlretrieve(base_url % file, os.path.join(WD, "data", file))
import os
import pandas as pd
import seaborn as sns
import statsmodels.formula.api as smfrmla
import statsmodels.api as sm

brain_vol = pd.read_excel(os.path.join(WD, "data", "brain_vol.xlsx"),
                          sheet_name='data')
brain_vol = brain_vol.dropna()
assert brain_vol.shape == (766, 9)

2. Descriptive statistics

Most participants have several MRI sessions (column session). Select only the rows from session one, “ses-01”.
# Round float at 2 decimals when printing
pd.options.display.float_format = '{:,.2f}'.format
brain_vol1 =brain_vol[brain_vol.session =="ses-01"]
# Check that there are no duplicates
assert len(brain_vol1.participant_id.unique()) ==
len(brain_vol1.participant_id)
Global descriptive statistics of numerical variables
desc_glob_num =brain_vol1.describe()
print(desc_glob_num)
Out:
          age  gm_vol  wm_vol  csf_vol  tiv_vol  gm_f  wm_f
count 244.00 244.00 244.00 244.00 244.00 244.00 244.00
mean 34.54 0.71 0.44 0.31 1.46 0.49 0.30
std 12.09 0.08 0.07 0.08 0.17 0.04 0.03
min 18.00 0.48 0.05 0.12 0.83 0.37 0.06
25% 25.00 0.66 0.40 0.25 1.34 0.46 0.28
50% 31.00 0.70 0.43 0.30 1.45 0.49 0.30
75% 44.00 0.77 0.48 0.37 1.57 0.52 0.31
max 61.00 1.03 0.62 0.63 2.06 0.60 0.36
Out:
site group sex
count 244 244 244
unique 7 2 2
top S7 Patient M
freq 65 157 155
Get count by level
           site   group     sex
Control     nan   87.00     nan
F           nan     nan   89.00
M           nan     nan  155.00
Patient     nan  157.00     nan
S1        13.00     nan     nan
S3        29.00     nan     nan
S4        15.00     nan     nan
S5        62.00     nan     nan
S6         1.00     nan     nan
S7        65.00     nan     nan
S8        59.00     nan     nan
           site   group     sex
Control     nan   86.00     nan
F           nan     nan   88.00
M           nan     nan  155.00
Patient     nan  157.00     nan
S1        13.00     nan     nan
S3        29.00     nan     nan
S4        15.00     nan     nan
S5        62.00     nan     nan
S7        65.00     nan     nan
S8        59.00     nan     nan

Descriptive statistics of numerical variables per clinical status

desc_group_num = brain_vol1[["group", 'gm_vol']].groupby("group").describe()
print(desc_group_num)

Out:
            gm_vol
             count  mean   std   min   25%   50%   75%   max
group
Control      86.00  0.72  0.09  0.48  0.66  0.71  0.78  1.03
Patient     157.00  0.70  0.08  0.53  0.65  0.70  0.76  0.90
3. Statistics
Objectives:
1. Site effect of gray matter atrophy
2. Test the association between the age and gray matter atrophy in the control and
patient population independently.
3. Test for differences of atrophy between the patients and the controls
4. Test for the interaction between age and clinical status, i.e., is the brain atrophy process faster in the patient population than in the control population?
5. The effect of the medication in the patient population.
import statsmodels.api as sm
import statsmodels.formula.api as smfrmla
import scipy.stats
import seaborn as sns
Plot
sns.violinplot("site", "gm_f", data=brain_vol1)
Stats with scipy

fstat, pval = scipy.stats.f_oneway(*[brain_vol1.gm_f[brain_vol1.site == s]
                                     for s in brain_vol1.site.unique()])
print("Oneway Anova gm_f ~ site F=%.2f, p-value=%E" % (fstat, pval))
Stats with statsmodels
anova =smfrmla.ols("gm_f ~ site", data=brain_vol1).fit()
# print(anova.summary())
print("Site explains %.2f%% of the grey matter fraction variance" %
(anova.rsquared * 100))
print(sm.stats.anova_lm(anova, typ=2))
Stats with scipy

print("--- In control population ---")
beta, beta0, r_value, p_value, std_err = \
    scipy.stats.linregress(x=brain_vol1_ctl.age, y=brain_vol1_ctl.gm_f)
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Age explains 10.57% of the grey matter fraction variance
--- In patient population ---
OLS Regression Results
==============================================================================
Dep. Variable: gm_f R-squared: 0.280
Model: OLS Adj. R-squared: 0.275
Method: Least Squares F-statistic: 60.16
Date: jeu., 31 oct. 2019 Prob (F-statistic): 1.09e-12
Time:                        16:09:40   Log-Likelihood:                 289.38
No. Observations: 157 AIC: -574.8
Df Residuals: 155 BIC: -568.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
Intercept      0.5569      0.009     60.817      0.000       0.539       0.575
age           -0.0019      0.000     -7.756      0.000      -0.002      -0.001
==============================================================================
Omnibus: 2.310 Durbin-Watson: 1.325
Prob(Omnibus): 0.315 Jarque-Bera ( J B ) : 1.854
Skew: 0.230 Prob(JB): 0.396
Kurtosis:                       3.268   Cond. No.                         111.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Age explains 27.96% of the grey matter fraction variance
Before testing for differences of atrophy between the patients and the controls, run preliminary tests for an age x group effect (patients could be older or younger than controls).
Plot
Stats with scipy
print(scipy.stats.ttest_ind(brain_vol1_ctl.age, brain_vol1_pat.age))
Out:
Ttest_indResult(statistic=-1.2155557697674162, pvalue=0.225343592508479)
Stats with statsmodels
print(smfrmla.ols("age ~ group", data=brain_vol1).fit().summary())
print("No significant difference in age between patients and controls")
Out:
OLS Regression Results
==============================================================================
Dep. Variable: age R-squared: 0.006
Model: OLS Adj. R-squared: 0.002
Method: Least Squares F-statistic: 1.478
Date: jeu., 31 oct. 2019 Prob (F-statistic): 0.225
Time: 16:09:40 Log-Likelihood: -949.69
No. Observations: 243 AIC: 1903.
Df Residuals: 241 BIC: 1910.
Df Model: 1
Covariance Type: nonrobust
====================================================================================
coef std err t P>|t| [0.025 0.975]
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
No significant difference in age between patients and controls
Preliminary tests for sex x group (more/less males in patients than in Controls)
Out:

            sum_sq      df     F  PR(>F)
group         0.00    1.00  0.01    0.92
Residual     60.46  241.00   nan     nan

No significant difference in age between patients and controls
This model is simplistic; we should adjust for age and site:

print(sm.stats.anova_lm(smfrmla.ols(
    "gm_f ~ group + age + site", data=brain_vol1).fit(), typ=2))
print("No significant difference in age between patients and controls")
Out:
            sum_sq      df      F  PR(>F)
group         0.00    1.00   1.82    0.18
site          0.11    5.00  19.79    0.00
age           0.09    1.00  86.86    0.00
Residual      0.25  235.00    nan     nan
No significant difference in age between patients and controls
4. Test for the interaction between age and clinical status, i.e., is the brain atrophy process faster in the patient population than in the control population?
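The fit that produces the ancova model used below is not shown in this excerpt. A plausible model, consistent with the ANOVA table and parameters that follow (site and age effects plus a group-by-age interaction), might be:

# Hypothetical reconstruction of the interaction model
ancova = smfrmla.ols("gm_f ~ group:age + age + site", data=brain_vol1).fit()
print(sm.stats.anova_lm(ancova, typ=2))
print("= Parameters =")
print(ancova.params.round(2))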
print("%.3f%% of grey matter loss per year (almost %.1f%% per decade)" %\
(ancova.params.age * 100, ancova.params.age * 100 * 10))
Out:
            sum_sq      df      F  PR(>F)
site          0.11    5.00  20.28    0.00
age           0.10    1.00  89.37    0.00
group:age     0.00    1.00   3.28    0.07
Residual      0.25  235.00    nan     nan

= Parameters =
Intercept               0.52
site[T.S3]              0.01
site[T.S4]              0.03
site[T.S5]              0.01
site[T.S7]              0.06
site[T.S8]              0.02
age                    -0.00
group[T.Patient]:age   -0.00
dtype: float64

-0.148% of grey matter loss per year (almost -1.5% per decade)
grey matter loss in patients is accelerated by -0.232% per decade
Multivariate statistics includes all statistical techniques for analyzing samples made of two or more variables. The data set (a 𝑁 × 𝑃 matrix X) is a collection of 𝑁 independent samples.
Source: Wikipedia
Algebraic definition
The dot product, denoted “·”, of two 𝑃-dimensional vectors a = [𝑎1, 𝑎2, ..., 𝑎𝑃] and b = [𝑏1, 𝑏2, ..., 𝑏𝑃] is defined as

a · b = aᵀb = sum_i 𝑎_i 𝑏_i = [𝑎1 ... 𝑎𝑃] [𝑏1, ..., 𝑏𝑃]ᵀ.

The Euclidean norm of a vector is

‖a‖₂ = √(a · a).

If two vectors are orthogonal, the angle between them is 90° and

a · b = 0.

At the other extreme, if they are codirectional, then the angle between them is 0° and

a · b = ‖a‖₂ ‖b‖₂.
Fig. 5: Projection.
import numpy as np

np.random.seed(42)
a = np.random.randn(10)
b = np.random.randn(10)

np.dot(a, b)

-4.085788532659924
3. Covariance matrix
where

S_jk = 1/(N−1) x_jᵀ x_k = 1/(N−1) sum_{i=1}^{N} x_ij x_ik

is an estimator of the covariance between the jth and kth variables.
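As a quick illustration (not part of the original text), the estimator above corresponds to numpy's np.cov, which expects variables in rows unless rowvar=False is passed:

import numpy as np

np.random.seed(0)
X = np.random.randn(100, 3)        # 100 samples, 3 variables
S = np.cov(X, rowvar=False)        # unbiased estimator, divides by N - 1

# Equivalent explicit computation
Xc = X - X.mean(axis=0)
assert np.allclose(S, Xc.T @ Xc / (X.shape[0] - 1))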
## Avoid warnings and force inline plot
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
##
import numpy as np
import scipy
import matplotlib.pyplot as plt
import seaborn as sns  # nice color
import pystatsml.plot_utils
mean[1] = np.array([2.5, 2.5])
Cov[1] = np.array([[1, .5],
                   [.5, 1]])
mean[2] = np.array([-2.5, -2.5])
Cov[2] = np.array([[1, .9],
                   [.9, 1]])
mean[3] = np.array([2.5, -2.5])
Cov[3] = np.array([[1, -.9],
                   [-.9, 1]])

# Generate dataset
for i in range(len(mean)):
    X[i] = np.random.multivariate_normal(mean[i], Cov[i], n_samples)

# Plot
for i in range(len(mean)):
    # Points
    plt.scatter(X[i][:, 0], X[i][:, 1], color=colors[i], label="class %i" % i)
# Means
plt.scatter(mean[i][0], mean[i][1], marker="o", s=200, facecolors='w',
edgecolors=colors[i], linewidth=2)
# Ellipses representing the covariance matrices
pystatsml.plot_utils.plot_cov_ellipse(Cov[i], pos=mean[i],
facecolor='none',
linewidth=2, edgecolor=colors[i])
plt.axis('equal')
_ = plt.legend(loc='upper left')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
corr = df.corr()  # correlation matrix (the original line is not shown in this excerpt)

f, ax = plt.subplots(figsize=(5.5, 4.5))
cmap = sns.color_palette("RdBu_r", 11)
# Draw the heatmap with the mask and correct aspect ratio
_ =sns.heatmap(corr, mask=None, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
clusters = [list(corr.columns[clustering.labels_ == lab]) for lab in set(clustering.labels_)]
print(clusters)
reordered = np.concatenate(clusters)
R = corr.loc[reordered, reordered]
[['mpg', 'cyl', 'disp', 'hp', 'wt', 'qsec', 'vs', 'carb'], ['am', 'gear'], ['drat']]
f, ax = plt.subplots(figsize=(5.5, 4.5))
# Draw the heatmap with the mask and correct aspect ratio
_ =sns.heatmap(R, mask=None, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
In statistics, precision is the reciprocal of the variance, and the precision matrix is the
matrix inverse of the covariance matrix.
It is related to partial correlations, which measure the degree of association between two variables while controlling the effect of other variables.
import numpy as np
for i, j in zip(*np.triu_indices_from(Prec, 1)):
    Pcor[i, j] = -Prec[i, j] / np.sqrt(Prec[i, i] * Prec[j, j])

print(Pcor.round(2))
# Precision matrix:
[[ 6.79 -3.21 -3.21  0.    0.    0.  ]
 [-3.21  6.79 -3.21  0.    0.    0.  ]
 [-3.21 -3.21  6.79  0.    0.    0.  ]
 [ 0.   -0.   -0.    5.26 -4.74 -0.  ]
 [ 0.    0.    0.   -4.74  5.26  0.  ]
 [ 0.    0.    0.    0.    0.    1.  ]]

# Partial correlations:
[[  nan  0.47  0.47 -0.   -0.   -0.  ]
 [  nan   nan  0.47 -0.   -0.   -0.  ]
 [  nan   nan   nan -0.   -0.   -0.  ]
 [  nan   nan   nan   nan  0.9   0.  ]
 [  nan   nan   nan   nan   nan -0.  ]
 [  nan   nan   nan   nan   nan   nan]]
6. Mahalanobis distance
• The Mahalanobis distance is a measure of the distance between two points x and 𝜇
where the dispersion (i.e. the covariance structure) of the samples is taken into
account.
•The dispersion is considered through covariance
√︁
𝐷 𝑀 (x, 𝜇) = (x − 𝜇)𝑇 Σ − 1 (x − 𝜇).
Intuitions
• Distances along the principal directions of dispersion are contracted since they
correspond to likely dispersion of points.
• Distances orthogonal to the principal directions of dispersion are dilated since they correspond to unlikely dispersion of points.
For example

𝐷_M(1) = √(1ᵀ Σ⁻¹ 1).

ones = np.ones(Cov.shape[0])
d_euc = np.sqrt(np.dot(ones, ones))
d_mah = np.sqrt(np.dot(np.dot(ones, Prec), ones))

The first dot product shows that distances along the principal directions of dispersion are contracted:

print(np.dot(ones, Prec))
import numpy as np
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import pystatsml.plot_utils
%matplotlib inline

np.random.seed(40)
colors = sns.color_palette()
Covi = scipy.linalg.inv(Cov)
dm_m_x1 =scipy.spatial.distance.mahalanobis(mean, x1, Covi)
dm_m_x2 =scipy.spatial.distance.mahalanobis(mean, x2, Covi)
# Plot distances
vm_x1 = (x1 - mean) / d2_m_x1
vm_x2 = (x2 - mean) / d2_m_x2
jitter = .1
plt.plot([mean[0] - jitter, d2_m_x1 * vm_x1[0] - jitter],
         [mean[1], d2_m_x1 * vm_x1[1]], color='k')
plt.plot([mean[0] - jitter, d2_m_x2 * vm_x2[0] - jitter],
         [mean[1], d2_m_x2 * vm_x2[1]], color='k')
plt.legend(loc='lower right')
plt.text(-6.1, 3,
'Euclidian: d(m, x1) =%.1f<d(m, x2) =%.1f' % (d2_m_x1, d2_m_x2), color='k')
plt.text(-6.1, 3.5,
'Mahalanobis: d(m, x1) =%.1f>d(m, x2) =%.1f' % (dm_m_x1, dm_m_x2),
color='r')
plt.axis('equal')
print('Euclidian d(m, x1) =%.2f <d(m, x2) =%.2f' % (d2_m_x1, d2_m_x2))
print('Mahalanobis d(m, x1) =%.2f >d(m, x2) =%.2f' % (dm_m_x1, dm_m_x2))
If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If the covariance matrix is diagonal, then the resulting distance measure is called a normalized Euclidean distance.
More generally, the Mahalanobis distance is a measure of the distance between a point x
and a distribution �(x|𝜇, Σ). It is a multi-dimensional generalization of the idea of
measuring how many standard deviations away x is from the mean. This distance is zero
if x is at the mean, and grows as x moves away from the mean: along each principal
component axis, it measures the number of standard deviations from x to the mean of the
distribution.
# x, y grid
x, y = np.mgrid[-3:3:.1, -3:3:.1]
X = np.stack((x.ravel(), y.ravel())).T
norm = multivariate_normal_pdf(X, mean, sigma).reshape(x.shape)

# Do it with scipy
norm_scpy = multivariate_normal(mu, sigma).pdf(np.stack((x, y), axis=2))
assert np.allclose(norm, norm_scpy)
# Plot
fig =plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
surf =ax.plot_surface(x, y, norm,
rstride=3,
cstride=3, cmap=plt.cm.coolwarm,
linewidth=1, antialiased=False
)
ax.set_zlim(0, 0.2)
ax.zaxis.set_major_locator(plt.LinearLocator(10))
ax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax. set_zlabel('p(x)')
8. Exercises
Two libraries:
• Pandas: https://pandas.pydata.org/pandas-docs/stable/timeseries.html
• statsmodels: http://www.statsmodels.org/devel/tsa.html
1. Stationarity
# String as index
prices = {'apple': 4.99,
          'banana': 1.99,
          'orange': 3.99}
ser = pd.Series(prices)

0    1
1    3
dtype: int64

apple     4.99
banana    1.99
orange    3.99
dtype: float64

a    1
b    2
dtype: int64

2
3. Time Series Analysis of Google Trends
source: https://www.datacamp.com/community/tutorials/time-series-analysis-tutorial
Get Google Trends data of keywords such as ‘diet’ and ‘gym’ and see how they vary over
time while learning about trends and seasonality in time series data.
In the Facebook Live code along session on the 4th of January, we checked out Google trends
data of keywords ‘diet’, ‘gym’ and ‘finance’ to see how they vary over time. We asked
ourselves if there could be more searches for these terms in January when we’re all
trying to turn over a new leaf?
In this tutorial, you’ll go through the code that we put together during the session step by
step. You’re not going to do much mathematics but you are going to do the following:
• Read data
• Recode data
• Exploratory Data Analysis
4. Read data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
print(df.head())

# Rename columns
df.columns = ['month', 'diet', 'gym', 'finance']
     Month  diet: (Worldwide)  gym: (Worldwide)  finance: (Worldwide)
0  2004-01                100                31                    48
1  2004-02                 75                26                    49
2  2004-03                 67                24                    47
3  2004-04                 70                22                    48
4  2004-05                 72                22                    43

# Describe
print(df.describe())
             diet         gym     finance
count  168.000000  168.000000  168.000000
mean    49.642857   34.690476   47.148810
std      8.033080    8.134316    4.972547
min     34.000000   22.000000   38.000000
25%     44.000000   28.000000   44.000000
50%     48.500000   32.500000   46.000000
75%     53.000000   41.000000   50.000000
max    100.000000   58.000000   73.000000

4.4.5 Recode data

Next, you’ll turn the ‘month’ column into a DateTime data type and make it the index of the DataFrame.
Note that you do this because you saw in the result of the .info() method that the ‘Month’ column was actually of data type object. Now, that generic data type encapsulates everything from strings to integers, etc. That’s not exactly what you want when you want to be looking at time series data. That’s why you’ll use .to_datetime() to convert the ‘month’ column in your DataFrame to a DateTime.
Be careful! Make sure to include the inplace argument when you’re setting the index of the
DataFrame df so that you actually alter the original index and set it to the ‘month’ column.
df.month = pd.to_datetime(df.month)
df.set_index('month', inplace=True)
print(df.head())

            diet  gym  finance
month
2004-01-01   100   31       48
You can use a built-in pandas visualization method .plot() to plot your data as 3 line plots
on a single figure (one for each column, namely, ‘diet’, ‘gym’, and ‘finance’).
df.plot()
plt.xlabel('Year');

# change figure parameters
# df.plot(figsize=(20,10), linewidth=5, fontsize=20)

# Plot single column
# df[['diet']].plot(figsize=(20,10), linewidth=5, fontsize=20)
# plt.xlabel('Year', fontsize=20);
Note that this data is relative. As you can read on Google trends:
Numbers represent search interest relative to the highest point on the chart for the given
region and time. A value of 100 is the peak popularity for the term. A value of 50 means
that the term is half as popular. Likewise a score of 0 means the term was less than 1%
as popular as the peak.
A rolling average takes, for each time point, the average of the points on either side of it, where the number of points is specified by a window size.
diet_resamp_yr = diet.resample('A').mean()
diet_roll_yr = diet.rolling(12).mean()

<matplotlib.legend.Legend at 0x7f0db4e0a2b0>

diet_smooth = np.array([x[(idx-win_half):(idx+win_half)].mean()
                        for idx in np.arange(win_half, len(x))])
plt.plot(diet_smooth)

[<matplotlib.lines.Line2D at 0x7f0db4cfea90>]

Text(0.5, 0, 'Year')
Detrending
Text(0.5, 0, 'Year')
df.diff().plot()
plt. xlabel('Year')
Text(0.5, 0, 'Year')
df.plot()
plt. xlabel('Year') ;
print(df.corr())
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db29f3ba8>
‘diet’ and ‘gym’ are negatively correlated! Remember that you have a seasonal and a trend
component. From the correlation coefficient, ‘diet’ and ‘gym’ are negatively correlated:
• trend components are negatively correlated;
• seasonal components would be positively correlated.
print(df.diff().corr())
Plot correlation matrix
sns.heatmap(df.diff().corr(), cmap="coolwarm")
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db28aeb70>
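The decomposition code is not shown in this excerpt, although the plot below uses trend, seasonal and residual; a plausible reconstruction with statsmodels is:

from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical reconstruction: decompose the monthly 'gym' series into
# trend, seasonality and residuals.
gym = df["gym"].astype(float)
decomposition = seasonal_decompose(gym)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid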
x = gym

plt.subplot(411)
plt.plot(x, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal, label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
4.4.10 Autocorrelation

A time series is periodic if it repeats itself at equally spaced intervals, say, every 12 months. The Autocorrelation Function (ACF) is a measure of the correlation between the time series and a lagged version of itself. For instance at lag 5, the ACF would compare the series at time instants t1...t2 with the series at instants t1-5...t2-5 (t1 and t2 being the end points).
Plot
# from pandas.plotting import autocorrelation_plot
from pandas.plotting import autocorrelation_plot
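The rest of this cell is not included in this excerpt; a plausible completion, plotting the autocorrelation of one of the series, is:

# Hypothetical completion: ACF plot of the 'diet' series
_ = autocorrelation_plot(df["diet"].astype(float))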
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db25b2dd8>
/home/edouard/anaconda3/lib/python3.7/site-packages/statsmodels/tsa/stattools.py:541: FutureWarning: fft=True will become the default in a future version of statsmodels. [...]
ACF peaks every 12 months: Time series is correlated with itself shifted by 12 months.
4.4.11 Time Series Forecasting with Python using Autoregressive Moving Average (ARMA) models
Source:
• https://www.packtpub.com/mapt/book/big_data_and_business_intelligence/9781783553358/7/ch07lvl1sec77/arma-models
• http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
• ARIMA: https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/
ARMA models are often used to forecast a time series. These models combine
autoregressive and moving average models. In moving average models, we assume that a
variable is the sum of the mean of the time series and a linear combination of noise
components.
The autoregressive and moving average models can have different orders. In general, we can define an ARMA model with p autoregressive terms and q moving average terms as follows:

x_t = sum_{i=1}^{p} a_i x_{t−i} + sum_{i=1}^{q} b_i 𝜀_{t−i} + 𝜀_t
Choosing p and q
Plot the partial autocorrelation function for an estimate of p, and likewise use the autocorrelation function for an estimate of q.

Partial Autocorrelation Function (PACF): this measures the correlation between the time series and a lagged version of itself, but after eliminating the variations already explained by the intervening comparisons. E.g. at lag 5, it will check the correlation but remove the effects already explained by lags 1 to 4.
from statsmodels.tsa.stattools import acf, pacf
x = df["gym"].astype(float)
#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(x_diff)), linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(x_diff)), linestyle='--', color='gray')
plt.title('Autocorrelation Function (q=1)')

#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(x_diff)), linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(x_diff)), linestyle='--', color='gray')
plt.title('Partial Autocorrelation Function (p=1)')
plt.tight_layout()
In this plot, the two dotted lines on either side of 0 are the confidence intervals. These can be used to determine the p and q values as:
• p: the lag value where the PACF chart crosses the upper confidence interval for the first time (here p=1).
• q: the lag value where the ACF chart crosses the upper confidence interval for the first time (here q=1).
1. Define the model by calling ARMA() and passing in the p and q parameters.
2. The model is prepared on the training data by calling the fit() function.
3. Predictions can be made by calling the predict() function and specifying the index of the time or times to be predicted.
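The cell that builds the model is not included in this excerpt. A minimal sketch, assuming the gym series x defined above and the old statsmodels ARMA API used throughout this document (recent statsmodels versions replace it with statsmodels.tsa.arima.model.ARIMA), could be:

from statsmodels.tsa.arima_model import ARMA  # removed in recent statsmodels releases

# ARMA(p=1, q=1) model, with p and q chosen from the PACF/ACF plots above
model = ARMA(x, order=(1, 1)).fit()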
print(model.summary())
plt.plot(x)
plt.plot(model.predict(), color='red')
plt.title('RSS: %.4f' % sum((model.fittedvalues - x)**2))
/home/edouard/anaconda3/lib/python3.7/site-packages/statsmodels/tsa/base/tsa_model.
˓→py:165: ValueWarning: No frequency information was provided, so inferred frequency MS␣
˓ → will be used.
% freq, ValueWarning)
/home/edouard/anaconda3/lib/python3.7/site-packages/statsmodels/tsa/kalmanf/kalmanfilter.
˓→py:221: RuntimeWarning: divide by zero encountered in true_divide
Z_mat, R_mat, T_mat)
CHAPTER
FIVE
MACHINE LEARNING
1. Introduction
Decompose the data matrix X 𝑁 ×𝑃 into a product of a mixing matrix U 𝑁 ×𝐾 and a dictionary
matrix V𝑃 × 𝐾 .
X = UVᵀ,

X ≈ X̂ = UVᵀ,
X = UDV𝑇 ,
where the N × P matrix X is written as the product of U (N × K), the diagonal matrix D (K × K) with diagonal entries d1, ..., dK, and Vᵀ (K × P).
V: right-singular vectors
• V = [v1, · · · , v𝐾 ] is a 𝑃 × 𝐾 orthogonal matrix.
• It is a dictionary of patterns to be combined (according to the mixing coefficients) to
reconstruct the original samples.
• V performs the initial rotations (projections) along the 𝐾 = min(𝑁, 𝑃) principal component directions, also called loadings.
• Each v_k performs the linear combination of the variables that has maximum sample variance, subject to being uncorrelated with the previous v_{k−1}.
D: singular values
• D is a 𝐾 × 𝐾 diagonal matrix made of the singular values of X with 𝑑1≥ 𝑑2≥ · · · ≥
𝑑𝐾≥ 0.
• D scales the projection along the coordinate axes by 𝑑1, 𝑑2, · · · , 𝑑𝐾.
U: left-singular vectors
• U = [u1, · · · , u𝐾] is an 𝑁 × 𝐾 orthogonal matrix.
• Each row u_i provides the mixing coefficients of dictionary items to reconstruct the sample x_i.
• It may be understood as the coordinates on the new orthogonal basis (obtained after
the initial rotation) called principal components in the PCA.
V transforms correlated variables (X) into a set of uncorrelated ones (UD) that better
expose the various relationships among the original data items.
X = UDV𝑇 , (5.1)
X V = UDV𝑇 V, (5.2)
X V = UDI, (5.3)
X V = UD (5.4)
At the same time, SVD is a method for identifying and ordering the dimensions along
which data points exhibit the most variation.
import numpy as np
import scipy
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

np.random.seed(42)

# dataset
n_samples = 100
experience = np.random.normal(size=n_samples)
salary = 1500 + experience + np.random.normal(size=n_samples, scale=.5)
X = np.column_stack([experience, salary])
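The SVD computation itself is not visible in this excerpt, although U and Vh are used in the plot below. A plausible sketch (centering X first is an assumption, not shown in the original):

# Hypothetical reconstruction: center the data in place, then compute its SVD
from scipy import linalg
X = X - X.mean(axis=0)
U, s, Vh = linalg.svd(X, full_matrices=False)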
plt.subplot(131)
plt.scatter(U[:, 0], U[:, 1], s=50)
plt.axis('equal')
plt.title("U: Rotated and scaled data")

plt.subplot(132)
# Project data
PC = np.dot(X, Vh.T)
plt.scatter(PC[:, 0], PC[:, 1], s=50)
plt.axis('equal')
plt.title("XV: Rotated data")
plt.xlabel("PC1")

plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], s=50)
for i in range(Vh.shape[0]):
    plt.arrow(x=0, y=0, dx=Vh[i, 0], dy=Vh[i, 1], head_width=0.2,
              head_length=0.2, linewidth=2, fc='r', ec='r')
    plt.text(Vh[i, 0], Vh[i, 1], 'v%i' % (i+1), color="r", fontsize=15,
             horizontalalignment='right', verticalalignment='top')
plt.axis('equal')
plt.ylim(-4, 4)

plt.tight_layout()
Sources:
• C. M. Bishop Pattern Recognition and Machine Learning, Springer, 2006
• Everything you did and didn’t know about PCA
• Principal Component Analysis in 3 Simple Steps
Principles
• Principal components analysis is the main method used for linear dimension
reduction.
• The idea of principal component analysis is to find the 𝐾 principal component directions (called the loadings) V (𝐾 × 𝑃) that capture the variation in the data as much as possible.
• It converts a set of 𝑁 𝑃-dimensional observations N (𝑁 × 𝑃) of possibly correlated variables into a set of 𝑁 𝐾-dimensional samples C (𝑁 × 𝐾), where 𝐾 < 𝑃. The new variables are linearly uncorrelated. The columns of C (𝑁 × 𝐾) are called the principal components.
• The dimension reduction is obtained by using only 𝐾 < 𝑃 components that exploit correlation (covariance) among the original variables.
• PCA is mathematically defined as an orthogonal linear transformation V (𝐾 × 𝑃) that transforms the data to a new coordinate system such that the greatest variance by some projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.
C 𝑁 ×𝐾 = X 𝑁 ×𝑃 V𝑃 ×𝐾
• PCA can be thought of as fitting a 𝑃-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipse is small, then the variance along that axis is also small, and by omitting that axis and its corresponding principal component from our representation of the dataset, we lose only a commensurately small amount of information.
• Finding the 𝐾 largest axes of the ellipse will permit us to project the data onto a space having dimensionality 𝐾 < 𝑃 while maximizing the variance of the projected data.
Dataset preprocessing
Centering
Consider a data matrix, X , with column-wise zero empirical mean (the sample mean of
each column has been shifted to zero), ie. X is replaced by X − 1x¯𝑇.
Standardizing
Optionally, standardize the columns, i.e., scale them by their standard deviation. Without standardization, a variable with a high variance will capture most of the effect of the PCA. The principal direction will be aligned with this variable. Standardization will, however, raise noise variables to the same level as informative variables.
The covariance matrix of centered standardized data is the correlation matrix.
To begin with, consider the projection onto a one-dimensional space (𝐾 = 1). We can define
the direction of this space using a 𝑃 -dimensional vector v, which for convenience (and
without loss of generality) we shall choose to be a unit vector so that ‖v‖ 2 = 1 (note
that we are only
interested in the direction defined by v, not in the magnitude of v itself). PCA consists of two main steps:
Projection in the directions that capture the greatest variance
Each 𝑃-dimensional data point x_i is then projected onto v, where the coordinate (in the coordinate system of v) is a scalar value, namely x_iᵀv. I.e., we want to find the vector v that maximizes these coordinates along v, which we will see corresponds to maximizing the variance of the projected data. This is equivalently expressed as

v = arg max_{‖v‖=1} (1/N) sum_i (x_iᵀ v)².
We can write this in matrix form as

v = arg max_{‖v‖=1} (1/N) ‖Xv‖² = arg max_{‖v‖=1} (1/N) vᵀXᵀXv = arg max_{‖v‖=1} vᵀ S_XX v,

where S_XX is a biased estimate of the covariance matrix of the data, i.e.

S_XX = (1/N) XᵀX.
We now maximize the projected variance vᵀS_XX v with respect to v. Clearly, this has to be a constrained maximization to prevent ‖v‖₂ → ∞. The appropriate constraint comes from the normalization condition ‖v‖₂ ≡ ‖v‖₂² = vᵀv = 1. To enforce this constraint, we introduce a Lagrange multiplier that we shall denote by 𝜆, and then make an unconstrained maximization of

vᵀ S_XX v − 𝜆 (vᵀv − 1).

By setting the gradient with respect to v equal to zero, we see that this quantity has a stationary point when

S_XX v = 𝜆v,

and, left-multiplying by vᵀ and using vᵀv = 1, the projected variance is

vᵀ S_XX v = 𝜆,

so the variance is maximized when v is the eigenvector of S_XX with the largest eigenvalue 𝜆.
Back to SVD
XᵀX = (UDVᵀ)ᵀ(UDVᵀ)
    = VDᵀUᵀUDVᵀ
    = VD²Vᵀ

VᵀXᵀXV = D²

(1/(N−1)) VᵀXᵀXV = (1/(N−1)) D²

Vᵀ S_XX V = (1/(N−1)) D².

Considering only the kth right-singular vector v_k associated to the singular value d_k,

v_kᵀ S_XX v_k = (1/(N−1)) d_k²,
It turns out that if you have done the singular value decomposition then you already have the eigenvalue decomposition of XᵀX, where:
- the eigenvectors of S_XX are equivalent to the right-singular vectors, V, of X;
- the eigenvalues, 𝜆_k, of S_XX, i.e. the variances of the components, are equal to 1/(N−1) times the squared singular values, d_k².
Moreover, computing PCA with the SVD does not require forming the matrix XᵀX, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix, unless only a handful of components are required.
PCA outputs
The SVD or the eigendecomposition of the data covariance matrix provides three main quantities:
1. Principal component directions or loadings are the eigenvectors of XᵀX. The V (𝐾 × 𝑃), or the right-singular vectors of an SVD of X, are called principal component directions of
2. Principal components is the 𝑁 × 𝐾 matrix C which is obtained by projecting X onto
the principal components directions, i.e.
C 𝑁 ×𝐾 = X 𝑁 ×𝑃 V𝑃 × 𝐾 .
C (N×K) = X (N×P) V (P×K)        (5.5)
        = UDVᵀ V                  (5.6)
        = UD I (K×K)              (5.7)
        = UD                      (5.8)

Thus c_j = Xv_j = u_j d_j, for j = 1, ..., K. Hence u_j is simply the projection of the row vectors of X, i.e., the input predictor vectors, on the direction v_j, scaled by d_j.

c1 = [ x_{1,1} v_{1,1} + ... + x_{1,P} v_{1,P}
       x_{2,1} v_{1,1} + ... + x_{2,P} v_{1,P}
       ...
       x_{N,1} v_{1,1} + ... + x_{N,P} v_{1,P} ]
3. The variance of each component is given by the eigenvalues 𝜆_j, j = 1, ..., K. It can be obtained from the singular values:

var(c_j) = (1/(N−1)) (Xv_j)²      (5.9)
         = (1/(N−1)) (u_j d_j)²   (5.10)
         = (1/(N−1)) d_j²         (5.11)
We must choose 𝐾* ∈ [1, . . . , 𝐾], the number of required components. This can be done by calculating the explained variance ratio of the 𝐾* first components and by choosing 𝐾* such that the cumulative explained variance ratio is greater than some given threshold (e.g., ≈ 90%). This is expressed as

cumulative explained variance(c_K*) = sum_{j=1}^{K*} var(c_j) / sum_{j=1}^{K} var(c_j).
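As a quick illustration (not from the original text), scikit-learn exposes these ratios directly, so the cumulative explained variance can be computed as:

import numpy as np
from sklearn.decomposition import PCA

pca = PCA().fit(X)                                 # X: any (centered) data matrix
cum_var = np.cumsum(pca.explained_variance_ratio_)
K_star = int(np.argmax(cum_var >= 0.90)) + 1       # smallest K reaching ~90% of the variance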
PCs
Plot the samples projected on the first principal components, e.g. PC1 against PC2.
PC directions
Exploring the loadings associated with a component provides the contribution of each
original variable in the component.
Remark: The loadings (PC directions) are the coefficients of the multiple regression of PC on the original variables:

c = Xv                      (5.12)
Xᵀc = XᵀXv                  (5.13)
(XᵀX)⁻¹Xᵀc = v              (5.14)
Another way to evaluate the contribution of the original variables in each PC can be obtained by computing the correlation between the PCs and the original variables, i.e. the columns of X, denoted x_j, for j = 1, . . . , P. For the kth PC, compute and plot the correlations with all original variables:

cor(c_k, x_j), j = 1 . . . P, k = 1 . . . K.
np.random.seed(42)

# dataset
n_samples = 100
experience = np.random.normal(size=n_samples)
salary = 1500 + experience + np.random.normal(size=n_samples, scale=.5)
X = np.column_stack([experience, salary])
PC = pca.transform(X)
plt.subplot(121)
plt.scatter(X[:, 0], X[: , 1])
plt.xlabel("x1");
plt.ylabel("x2")
plt.subplot(122)
plt.scatter(PC[:, 0], PC[:, 1])
plt.xlabel("PC1 (var=%.2f)" % pca.explained_variance_ratio_[0])
plt.ylabel("PC2 (var=%.2f)" % pca.explained_variance_ratio_[1])
[0.93646607 0.06353393]
plt.axis('equal')
plt.tight_layout()
Resources:
• http://www.stat.pitt.edu/sungkyu/course/2221Fall13/lec8_mds_combined.pdf
• https://en.wikipedia.org/wiki/Multidimensional_scaling
• Hastie, Tibshirani and Friedman (2009). The Elements of Statistical Learning: Data
Mining, Inference, and Prediction. New York: Springer, Second Edition.
The purpose of MDS is to find a low-dimensional projection of the data in which the pairwise distances between data points are preserved, as closely as possible (in a least-squares sense).
• Let D be the (𝑁 × 𝑁) pairwise distance matrix where 𝑑_ij is the distance between points 𝑖 and 𝑗.
• The MDS concept can be extended to a wide variety of data types specified in terms of
a similarity matrix.
Given the dissimilarity (distance) matrix D (N×N) = [d_ij], MDS attempts to find 𝐾-dimensional projections of the 𝑁 points x1, . . . , xN ∈ R^K, concatenated in an X (N×K) matrix, so that the d_ij ≈ ‖x_i − x_j‖ are as close as possible. This can be obtained by the minimization of a loss function called the stress function

stress(X) = sum_{i≠j} (d_ij − ‖x_i − x_j‖)².

The Sammon mapping performs better at preserving small distances compared to the least-squares scaling.
Example
The eurodist dataset provides the road distances (in kilometers) between 21 cities in Europe. Given this matrix of pairwise (non-Euclidean) distances D = [𝑑_ij], MDS can be used to recover the coordinates of the cities in some Euclidean referential whose orientation is arbitrary.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Pairwise distance between European cities
try:
    url = '../datasets/eurodist.csv'
    df = pd.read_csv(url)
except:
    url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/eurodist.csv'
    df = pd.read_csv(url)

print(df.iloc[:5, :5])
        city  Athens  Barcelona  Brussels  Calais
0     Athens       0       3313      2963    3175
1  Barcelona    3313          0      1318    1326

city = df["city"]
D = np.array(df.iloc[:, 1:])  # Distance matrix

# Arbitrary choice of K=2 components
from sklearn.manifold import MDS
mds = MDS(dissimilarity='precomputed', n_components=2, random_state=40, max_iter=3000,
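          # Hypothetical completion -- the original continuation is not shown in this
          # excerpt. A plausible ending, consistent with the stress computation below
          # and the later use of X, is:
          eps=1e-9)
X = mds.fit_transform(D)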
Plot the results:
# Plot: apply some rotation and flip
theta = 80 * np.pi / 180.
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
Xr = np.dot(X, rot)
# flip x
Xr[:, 0] *= -1
plt.scatter(Xr[:, 0], Xr[:, 1])
for i in range(len(city)):
    plt.text(Xr[i, 0], Xr[i, 1], city[i])
plt.axis('equal')
plt.axis('equal')
(-1894.1017744377398, 2914.3652937179477, -1712.9885463201906, 2145.4522453884565)
We must choose 𝐾* ∈ {1, . . . , 𝐾}, the number of required components, by plotting the values of the stress function obtained using 𝑘 ≤ 𝑁 − 1 components. In general, start with 1, . . . , 𝐾 ≤ 4. Choose 𝐾* where you can clearly distinguish an elbow in the stress curve.
Thus, in the plot below, we choose to retain the information accounted for by the first two components, since this is where the elbow is in the stress curve.
k_range =range(1, min(5, D.shape[0]-1))
stress =[MDS(dissimilarity='precomputed', n_components=k,
random_state=42, max_iter=300, eps=1e-9).fit(D).stress_ for k in k_range]
print(stress)
plt.plot(k_range, stress)
plt.xlabel("k")
plt.ylabel("stress")
Sources:
• Scikit-learn documentation
• Wikipedia
Isomap
ax =fig.add_subplot(121, projection='3d')
ax.scatter(X[:, 0], X[ :, 1], X[ :, 2], c=color, cmap=plt.cm.Spectral)
ax.view_init(4, -72)
plt.title('2D "S shape" manifold in 3D')
Y =manifold.Isomap(n_neighbors=10, n_components=2).fit_transform(X)
ax = fig.add_subplot(122)
plt.scatter(Y[:, 0], Y[: , 1], c=color, cmap=plt.cm.Spectral)
plt.title("Isomap")
plt.xlabel("First component")
plt.ylabel("Second component")
plt.axis('tight')
5.1.6 Exercises
PCA

Write a basic PCA class (e.g. BasicPCA) with two methods:
• fit(X) that estimates the data mean, the principal component directions V and the explained variance of each component.
• transform(X) that projects the data onto the principal components.
Check that your BasicPCA gave similar results, compared to the results from sklearn.
MDS
5.2 Clustering
Wikipedia: Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). Clustering is one of the main tasks of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.
Sources: http://scikit-learn.org/stable/modules/clustering.html
𝐽(𝑟, 𝜇) = sum_{n=1}^{N} sum_{k=1}^{K} 𝑟_nk ‖x_n − 𝜇_k‖²₂,
which represents the sum of the squares of the Euclidean distances of each data point to its assigned vector 𝜇_k. Our goal is to find values for the {𝑟_nk} and the {𝜇_k} so as to minimize the function 𝐽. We can do this through an iterative procedure in which each iteration involves two successive steps corresponding to successive optimizations with respect to the 𝑟_nk and the 𝜇_k. First we choose some initial values for the 𝜇_k. Then in the first phase we minimize 𝐽 with respect to the 𝑟_nk, keeping the 𝜇_k fixed. In the second phase we minimize 𝐽 with respect to the 𝜇_k, keeping the 𝑟_nk fixed. This two-stage optimization process is then repeated until convergence. We shall see that these two stages of updating 𝑟_nk and 𝜇_k correspond respectively to the expectation (E) and maximization (M) steps of the expectation-maximization (EM) algorithm, and to emphasize this we shall use the terms E step and M step in the context of the 𝐾-means algorithm.

Consider first the determination of the 𝑟_nk. Because 𝐽 is a linear function of 𝑟_nk, this optimization can be performed easily to give a closed-form solution. The terms involving different 𝑛 are independent, so we can optimize for each 𝑛 separately by choosing 𝑟_nk to be 1 for whichever value of 𝑘 gives the minimum value of ‖x_n − 𝜇_k‖². In other words, we simply assign the 𝑛th data point to the closest cluster centre. Now consider the optimization of the 𝜇_k with the 𝑟_nk held fixed: setting the derivative of 𝐽 with respect to 𝜇_k to zero gives 𝜇_k = sum_n 𝑟_nk x_n / sum_n 𝑟_nk, and so this result has a simple interpretation, namely set 𝜇_k equal to the mean of all of the data points x_n assigned to cluster 𝑘. For this reason, the procedure is known as the 𝐾-means algorithm.

The two phases of re-assigning data points to clusters and re-computing the cluster means are repeated in turn until there is no further change in the assignments (or until some maximum number of iterations is exceeded). Because each phase reduces the value of the objective function 𝐽, convergence of the algorithm is assured. However, it may converge to a local rather than global minimum of 𝐽.

from sklearn import cluster, datasets
import matplotlib.pyplot as plt
import seaborn as sns # nice color
%matplotlib inline
iris = datasets.load_iris()
X = iris.data[:, :2]  # use only 'sepal length' and 'sepal width'
y_iris = iris.target

km2 = cluster.KMeans(n_clusters=2).fit(X)
km3 = cluster.KMeans(n_clusters=3).fit(X)
km4 = cluster.KMeans(n_clusters=4).fit(X)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[: , 1], c=km2.labels_)
plt.title("K=2, J=%.2f" % km2.inertia_)
plt.subplot(132)
plt.scatter(X[:, 0], X[: , 1], c=km3.labels_)
plt.title("K=3, J=%.2f" % km3.inertia_)
plt.subplot(133)
plt.scatter(X[:, 0], X[: , 1], c=km4.labels_)#.astype(np.float))
plt.title("K=4, J=%.2f" % km4.inertia_)
Exercises
1. Analyse clusters
• Analyse the plot above visually. What would a good value of 𝐾 be?
• If you instead consider the inertia, the value of 𝐽 , what would a good value of 𝐾
be?
• Explain why there is such difference.
• For 𝐾 = 2 why did 𝐾-means clustering not find the two “natural” clusters? See the
assumptions of 𝐾-means: http://scikit-learn.org/stable/auto_examples/cluster/plot_
kmeans_assumptions.html#example-cluster-plot-kmeans-assumptions-py
Write a function kmeans(X, K) that return an integer vector of the samples’ labels.
2. Gaussian mixture models
To compute the class parameters 𝑝(𝑘), 𝜇_k, Σ_k we sum over all samples, weighting each sample 𝑖 by its responsibility or contribution to class 𝑘: 𝑝(𝑘 | x_i), such that for each point its contributions to all classes sum to one: sum_k 𝑝(𝑘 | x_i) = 1. This contribution is the conditional probability of class 𝑘 given x: 𝑝(𝑘 | x) (sometimes called the posterior). It can be computed using Bayes’ rule:

𝑝(𝑘 | x) = 𝑝(x | 𝑘) 𝑝(𝑘) / 𝑝(x)                                             (5.16)
         = 𝒩(x | 𝜇_k, Σ_k) 𝑝(𝑘) / sum_{k=1}^{K} 𝒩(x | 𝜇_k, Σ_k) 𝑝(𝑘)         (5.17)

Since the class parameters, 𝑝(𝑘), 𝜇_k and Σ_k, depend on the responsibilities 𝑝(𝑘 | x) and the responsibilities depend on the class parameters, we need a two-step iterative algorithm: the expectation-maximization (EM) algorithm. We discuss this algorithm next.
The expectation-maximization (EM) algorithm for Gaussian mixtures
Given a Gaussian mixture model, the goal is to maximize the likelihood function with
respect to the parameters (comprised of the means and covariances of the components and
the mixing coefficients).
Initialize the means 𝜇_k, covariances Σ_k and mixing coefficients 𝑝(𝑘).

1. E step. For each sample 𝑖, evaluate the responsibilities for each class 𝑘 using the current parameter values:

𝑝(𝑘 | x_i) = 𝒩(x_i | 𝜇_k, Σ_k) 𝑝(𝑘) / sum_{k=1}^{K} 𝒩(x_i | 𝜇_k, Σ_k) 𝑝(𝑘)

2. M step. For each class, re-estimate the parameters using the current responsibilities (with 𝑁_k = sum_i 𝑝(𝑘 | x_i)):

𝜇_k^new = (1/𝑁_k) sum_{i=1}^{N} 𝑝(𝑘 | x_i) x_i                               (5.18)
Σ_k^new = (1/𝑁_k) sum_{i=1}^{N} 𝑝(𝑘 | x_i) (x_i − 𝜇_k^new)(x_i − 𝜇_k^new)ᵀ   (5.19)
𝑝^new(𝑘) = 𝑁_k / 𝑁                                                            (5.20)

3. Evaluate the log-likelihood

sum_{i=1}^{N} ln { sum_{k=1}^{K} 𝒩(x_i | 𝜇_k, Σ_k) 𝑝(𝑘) },

and check for convergence of either the parameters or the log-likelihood. If the convergence criterion is not satisfied return to step 1.
import numpy as np
from sklearn import datasets
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt
import seaborn as sns  # nice color
import sklearn
import pystatsml.plot_utils

colors = sns.color_palette()

iris = datasets.load_iris()
X = iris.data[:, :2]  # 'sepal length (cm)', 'sepal width (cm)'
y_iris = iris.target
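# Hypothetical reconstruction -- the original fits are not shown in this excerpt.
# Gaussian mixture models with 2, 3 and 4 components, as used in the plots below.
gmm2 = GaussianMixture(n_components=2, covariance_type='full').fit(X)
gmm3 = GaussianMixture(n_components=3, covariance_type='full').fit(X)
gmm4 = GaussianMixture(n_components=4, covariance_type='full').fit(X)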
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm2.predict(X)])
for i in range(gmm2.covariances_.shape[0]):
    pystatsml.plot_utils.plot_cov_ellipse(cov=gmm2.covariances_[i, :], pos=gmm2.means_[i, :])

plt.subplot(132)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm3.predict(X)])
for i in range(gmm3.covariances_.shape[0]):
    pystatsml.plot_utils.plot_cov_ellipse(cov=gmm3.covariances_[i, :], pos=gmm3.means_[i, :])

plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm4.predict(X)])
for i in range(gmm4.covariances_.shape[0]):
    pystatsml.plot_utils.plot_cov_ellipse(cov=gmm4.covariances_[i, :], pos=gmm4.means_[i, :])
bic = list()
#print(X)
ks = np.arange(1, 10)

for k in ks:
    gmm = GaussianMixture(n_components=k, covariance_type='full')
    gmm.fit(X)
    bic.append(gmm.bic(X))

k_chosen = ks[np.argmin(bic)]

plt.plot(ks, bic)
plt.xlabel("k")
plt.ylabel("BIC")

print("Choose k=", k_chosen)
Choose k= 2
4. Hierarchical clustering
Ward clustering
from sklearn import cluster, datasets
import matplotlib.pyplot as plt
import seaborn as sns  # nice color

iris = datasets.load_iris()
X = iris.data[:, :2]  # 'sepal length (cm)', 'sepal width (cm)'
y_iris = iris.target
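# Hypothetical reconstruction -- the original fits are not shown in this excerpt.
# Ward (agglomerative) clustering with 2, 3 and 4 clusters, as used in the plots below.
ward2 = cluster.AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)
ward3 = cluster.AgglomerativeClustering(n_clusters=3, linkage='ward').fit(X)
ward4 = cluster.AgglomerativeClustering(n_clusters=4, linkage='ward').fit(X)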
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(X[:, 0], X[: , 1], c=ward2.labels_)
plt.title("K=2")
plt.subplot(132)
plt.scatter(X[:, 0], X[: , 1], c=ward3.labels_)
plt.title("K=3")
plt.subplot(133)
plt.scatter(X[:, 0], X[: , 1], c=ward4.labels_) # .astype(np.float))
plt.title("K=4")
5.2.5 Exercises
Perform clustering of the iris dataset based on all variables using Gaussian mixture
models. Use PCA to visualize clusters.
y = X 𝛽 + 𝜀,
where 𝜀 ∈ R 𝑁 are the residuals, or the errors of the prediction. The 𝛽 is found by
minimizing an objective function, which is the loss function, 𝐿(𝛽 ), i.e. the error
measured on the data. This error is the sum of squared errors (SSE) loss.
𝐿(𝛽) = SSE(𝛽)                                 (5.21)
     = sum_{i=1}^{N} (y_i − x_iᵀ𝛽)²           (5.22)
     = (y − X𝛽)ᵀ(y − X𝛽)                      (5.23)
     = ‖y − X𝛽‖²₂,                            (5.24)
Minimizing the SSE is the Ordinary Least Squares (OLS) regression objective. This is a simple OLS minimization whose analytic solution is:

𝛽_OLS = (XᵀX)⁻¹ Xᵀ y
Scikit learn offer many models for supervised learning, and they all follow the same
application programming interface (API), namely:
model = Estimator()
model.fit(X, y)
predictions = model.predict(X)
%matplotlib inline
import warnings
warnings.filterwarnings(action='once')

lr = lm.LinearRegression().fit(X, y)
y_pred = lr.predict(X)
print("R-squared =", metrics.r2_score(y, y_pred))

# Plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
XX = np.column_stack([xx1.ravel(), xx2.ravel()])
yy = lr.predict(XX)
ax.plot_surface(xx1, xx2, yy.reshape(xx1.shape), color='None')
ax.set_xlabel('TV')
ax.set_ylabel('Radio')
_ = ax.set_zlabel('Sales')

/home/edouard/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
  return f(*args, **kwds)
5.3.3 Overfitting
In statistics and machine learning, overfitting occurs when a statistical model describes
random errors or noise instead of the underlying relationships. Overfitting generally
occurs when a model is excessively complex, such as having too many parameters relative
to the number of observations. A model that has been overfit will generally have poor
predictive performance, as it can exaggerate minor fluctuations in the data.
A learning algorithm is trained using some set of training samples. If the learning
algorithm has the capacity to overfit the training samples the performance on the training
sample set will improve while the performance on unseen test sample set will decline.
The overfitting phenomenon has three main explanations: excessively complex models, multicollinearity, and high dimensionality.
Model complexity
Complex learners with too many parameters relative to the number of observations may
overfit the training dataset.
Multicollinearity
Predictors are highly correlated, meaning that one can be linearly predicted from the
others. In this situation the coefficient estimates of the multiple regression may change
erratically in response to small changes in the model or the data. Multicollinearity does
not reduce the predictive power or reliability of the model as a whole, at least not within
the sample data set; it only affects computations regarding individual predictors. That is,
a multiple regression model with correlated predictors can indicate how well the entire
bundle of predictors predicts the outcome variable, but it may not give valid results about
any individual predictor, or about which predictors are redundant with respect to others.
In case of perfect multicollinearity the predictor matrix is singular and therefore cannot
be inverted. Under these circumstances, for a general linear model y = X 𝛽 + 𝜀, the
ordinary least-squares estimator, 𝛽 𝑂𝐿𝑆 = (X𝑇 X) − 1 X 𝑇 y, does not exist.
An example where correlated predictor may produce an unstable model follows:
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

bv = np.array([10, 20, 30, 40, 50])                 # business volume
tax = .2 * bv                                       # Tax
bp = .1 * bv + np.array([-.1, .2, .1, -.2, .1])     # business potential

X = np.column_stack([bv, tax])
beta_star = np.array([.1, 0])  # true solution

'''
Since tax and bv are correlated, there is an infinite number of linear combinations
leading to the same prediction.
'''
High dimensionality
High dimensions means a large number of input features. Linear predictors associate one parameter to each input feature, so a high-dimensional situation (𝑃, the number of features, is large) with a relatively small number of samples 𝑁 (the so-called large 𝑃, small 𝑁 situation) generally leads to an overfit of the training data. Thus it is generally a bad idea to add many input features into the learner. This phenomenon is called the curse of dimensionality.
One of the most important criteria to use when choosing a learning algorithm is based on
the relative size of 𝑃and 𝑁 .
• Remember that the “covariance” matrix XᵀX used in the linear model is a 𝑃 × 𝑃 matrix of rank min(𝑁, 𝑃). Thus if 𝑃 > 𝑁 the equation system is overparameterized and admits an infinity of solutions that might be specific to the learning dataset. See also ill-conditioned or singular matrices.
• The sampling density of 𝑁 samples in a 𝑃-dimensional space is proportional to 𝑁^{1/𝑃}. Thus a high-dimensional space becomes very sparse, leading to poor estimations of sample densities.
• Another consequence of the sparse sampling in high dimensions is that all sample points are close to an edge of the sample. Consider 𝑁 data points uniformly distributed in a 𝑃-dimensional unit ball centered at the origin. Suppose we consider a nearest-neighbor estimate at the origin. The median distance from the origin to the closest data point is given by the expression

𝑑(𝑃, 𝑁) = (1 − (1/2)^{1/𝑁})^{1/𝑃}

(a short numeric check follows this list). A more complicated expression exists for the mean distance to the closest point. For N = 500, P = 10, 𝑑(𝑃, 𝑁) ≈ 0.52, more than halfway to the boundary. Hence most data points are closer to the boundary of the sample space than to any other data point. The reason that this presents a problem is that prediction is much more difficult near the edges of the training sample. One must extrapolate from neighboring sample points rather than interpolate between them. (Source: T. Hastie, R. Tibshirani, J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition, 2009.)
• Structural risk minimization provides a theoretical background of this phenomenon.
(See VC dimension.)
• See also bias–variance trade-off.
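A quick numerical check of the median-distance expression above, using the values N = 500 and P = 10 quoted in the text:

# Median distance from the origin to the closest of N uniform points
# in a P-dimensional unit ball (formula quoted above)
N, P = 500, 10
d = (1 - 0.5 ** (1 / N)) ** (1 / P)
print(round(d, 2))  # ~0.52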
def fit_on_increasing_size(model):
    n_samples = 100
    n_features_ = np.arange(10, 800, 20)
    r2_train, r2_test, snr = [], [], []
    for n_features in n_features_:
        # Sample the dataset (* 2 nb of samples)
        n_features_info = int(n_features / 10)
        np.random.seed(42)  # Make reproducible
        X = np.random.randn(n_samples * 2, n_features)
        beta = np.zeros(n_features)
        beta[:n_features_info] = 1
        Xbeta = np.dot(X, beta)
        eps = np.random.randn(n_samples * 2)
        y = Xbeta + eps
        # Split the dataset into train and test sample
        Xtrain, Xtest = X[:n_samples, :], X[n_samples:, :]
        ytrain, ytest = y[:n_samples], y[n_samples:]
        # fit/predict
        lr = model.fit(Xtrain, ytrain)
        y_pred_train = lr.predict(Xtrain)
        y_pred_test = lr.predict(Xtest)
        snr.append(Xbeta.std() / eps.std())
        r2_train.append(metrics.r2_score(ytrain, y_pred_train))
        r2_test.append(metrics.r2_score(ytest, y_pred_test))
    return n_features_, np.array(r2_train), np.array(r2_test), np.array(snr)
argmax = n_features[np.argmax(r2_test)]

# plot
fig, axis = plt.subplots(1, 2, figsize=(9, 3))
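The plotting fragment above is truncated in this draft. A minimal usage sketch, assuming a plain OLS learner and the helper function defined above (the exact plot layout is my own choice):

import numpy as np
import matplotlib.pyplot as plt
import sklearn.linear_model as lm
import sklearn.metrics as metrics

# Fit ordinary least squares on datasets of increasing dimensionality
mod = lm.LinearRegression()
n_features, r2_train, r2_test, snr = fit_on_increasing_size(model=mod)
argmax = n_features[np.argmax(r2_test)]

# Plot train/test R2 and SNR as a function of the number of input features
fig, axis = plt.subplots(1, 2, figsize=(9, 3))
axis[0].plot(n_features, r2_train, label="Train r2")
axis[0].plot(n_features, r2_test, label="Test r2")
axis[0].axvline(argmax, color="k", linestyle="--")
axis[0].legend()
axis[1].plot(n_features, snr, label="SNR")
axis[1].legend()
plt.tight_layout()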
Exercises
Overfitting generally leads to excessively complex weight vectors, accounting for noise or spurious correlations within predictors. To avoid this phenomenon the learning should constrain the solution in order to fit a global pattern. This constraint will reduce (bias) the capacity of the learning algorithm. Adding such a penalty will force the coefficients to be small, i.e. to shrink them toward zero.
Therefore the loss function L(β) (generally the SSE) is combined with a penalty function Ω(β), leading to the general form:
$$\text{Penalized}(\beta) = L(\beta) + \lambda \Omega(\beta)$$
The respective contribution of the loss and the penalty is controlled by the regularization parameter λ.
Ridge regression imposes an ℓ2 penalty on the coefficients, i.e. it penalizes with the Euclidean norm of the coefficients while minimizing SSE. The objective function becomes:
$$\text{Ridge}(\beta) = \sum_{i=1}^{N} (y_i - \mathbf{x}_i^T \beta)^2 + \lambda \|\beta\|_2^2 \qquad (5.25)$$
$$= \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2. \qquad (5.26)$$
The β that minimises Ridge(β) can be found by the following derivation:
$$\nabla_\beta \, \text{Ridge}(\beta) = 0 \qquad (5.27)$$
$$\nabla_\beta \left( (y - X\beta)^T (y - X\beta) + \lambda \beta^T \beta \right) = 0 \qquad (5.28)$$
$$\nabla_\beta \left( y^T y - 2\beta^T X^T y + \beta^T X^T X \beta + \lambda \beta^T \beta \right) = 0 \qquad (5.29)$$
$$-2 X^T y + 2 X^T X \beta + 2 \lambda \beta = 0 \qquad (5.30)$$
$$-X^T y + (X^T X + \lambda I)\beta = 0 \qquad (5.31)$$
$$(X^T X + \lambda I)\beta = X^T y \qquad (5.32)$$
$$\beta = (X^T X + \lambda I)^{-1} X^T y \qquad (5.33)$$
• The solution adds a positive constant to the diagonal of X^T X before inversion. This makes the problem nonsingular, even if X^T X is not of full rank, and was the main motivation behind ridge regression.
• Increasing λ shrinks the β coefficients toward 0.
• This approach penalizes the objective function by the Euclidean (ℓ2) norm of the coefficients such that solutions with large coefficients become unattractive.
The gradient of the loss:
$$\frac{\partial L(\beta, X, y)}{\partial \beta} = 2 \left( \sum_i \mathbf{x}_i (\mathbf{x}_i \cdot \beta - y_i) + \lambda \beta \right)$$
The ridge penalty shrinks the coefficients toward zero. The figure illustrates the OLS solution on the left, the ℓ1 and ℓ2 penalties in the middle panel, and the penalized OLS in the right panel. The right panel shows how the penalties shrink the coefficients toward zero. The black points are the minima found in each case, and the white points represent the true solution used to generate the data.
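Before the scikit-learn Ridge example below, here is a minimal NumPy check of the closed-form solution (5.33). The toy data and the value of λ are my own choices, purely for illustration:

import numpy as np
import sklearn.linear_model as lm

np.random.seed(42)
X = np.random.randn(50, 5)
y = X @ np.array([1., 1., 0., 0., 0.]) + np.random.randn(50)

lam = 10.0
# Closed-form ridge solution: beta = (X'X + lambda I)^-1 X'y
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
# scikit-learn equivalent (fit_intercept=False to match the formula above)
beta_sklearn = lm.Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.allclose(beta_closed, beta_sklearn))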
import matplotlib.pyplot as plt
import numpy as np
import sklearn.linear_model as lm

# lambda is alpha!
mod = lm.Ridge(alpha=10)

# Fit models on dataset
n_features, r2_train, r2_test, snr = fit_on_increasing_size(model=mod)
argmax = n_features[np.argmax(r2_test)]

# plot
fig, axis = plt.subplots(1, 2, figsize=(9, 3))
Exercise
Lasso regression penalizes the coefficients by the ℓ1 norm. This constraint will reduce (bias) the capacity of the learning algorithm. Adding such a penalty forces the coefficients to be small, i.e. it shrinks them toward zero. The objective function to minimize becomes:
$$\text{Lasso}(\beta) = \|y - X\beta\|_2^2 + \lambda \|\beta\|_1. \qquad (5.34)$$
This penalty forces some coefficients to be exactly zero, providing a feature selection property.
import matplotlib.pyplot as plt
import numpy as np
import sklearn.linear_model as lm

# lambda is alpha!
mod = lm.Lasso(alpha=.1)

# Fit models on dataset
n_features, r2_train, r2_test, snr = fit_on_increasing_size(model=mod)
argmax = n_features[np.argmax(r2_test)]

# plot
fig, axis = plt.subplots(1, 2, figsize=(9, 3))
Occam’s razor
Occam's razor (also written as Ockham's razor, and lex parsimoniae in Latin, meaning law of parsimony) is a problem-solving principle attributed to William of Ockham (1287-1347), an English Franciscan friar, scholastic philosopher and theologian. The principle can be interpreted as stating that among competing hypotheses, the one with the fewest assumptions should be selected.
Principle of parsimony
The penalty based on the ℓ1 norm promotes sparsity (scattered, or not dense): it forces many coefficients to be exactly zero. This also makes the coefficient vector scattered.
The figure below illustrates the OLS loss under a constraint acting on the ℓ1 norm of the coefficient vector, i.e. it illustrates the following optimization problem:
$$\underset{\beta}{\text{minimize}} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_1 \leq 1.$$
Optimization issues
Section to be completed
• No more closed-form solution.
• Convex but not differentiable.
• Requires specific optimization algorithms, such as the fast iterative shrinkage-thresholding algorithm (FISTA): Amir Beck and Marc Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems, SIAM J. Imaging Sci., 2009.
Fig. 3: Sparsity of the ℓ1 norm

The Elastic-net estimator combines the ℓ1 and ℓ2 penalties, and results in the problem of minimizing:
$$\text{Enet}(\beta) = \|y - X\beta\|_2^2 + \alpha \left( \rho \|\beta\|_1 + (1 - \rho) \|\beta\|_2^2 \right), \qquad (5.35)$$
where α acts as a global penalty and ρ as an ℓ1/ℓ2 ratio.
Rationale
• If there are groups of highly correlated variables, Lasso tends to arbitrarily select only one from each group. These models are difficult to interpret because covariates that are strongly associated with the outcome are not included in the predictive model. Conversely, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together.
• Studies on real-world data and simulation studies show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation.
import matplotlib.pyplot as plt
import numpy as np
import sklearn.linear_model as lm

# Elastic-net model (alpha and l1_ratio values assumed)
mod = lm.ElasticNet(alpha=.5, l1_ratio=.5)

# Fit models on dataset
n_features, r2_train, r2_test, snr = fit_on_increasing_size(model=mod)

# plot
fig, axis = plt.subplots(1, 2, figsize=(9, 3))
rotation of the input samples into a good discriminative one-dimensional sub-space, that best discriminates samples of the current class vs. samples of the other classes.
This score (a.k.a. the decision function) is transformed, using the nonlinear activation function f(·), into a "posterior probability" of class 1: p(y = 1|x) = f(w^T x), where p(y = 1|x) = 1 − p(y = 0|x).
The decision surfaces (hyperplanes orthogonal to w) correspond to f(x) = constant, so that w^T x = constant, and hence the decision surfaces are linear functions of x, even if the function f(·) is nonlinear.
A thresholding of the activation (shifted by the bias or intercept) provides the predicted class label.
The vector of parameters w, which defines the discriminative axis, minimizes an objective function J(w) that is the sum of a loss function L(w) and some penalty on the weights vector Ω(w):
$$\min_{w} \; J = \sum_i L\big(y_i, f(w^T \mathbf{x}_i)\big) + \Omega(w).$$
This geometric method does not make any probabilistic assumptions; instead it relies on distances. It looks for the linear projection of the data points onto a vector, w, that maximizes the between/within variance ratio, denoted F(w). Under a few assumptions, it will provide the same results as linear discriminant analysis (LDA), explained below.
Suppose two classes of observations, C0 and C1, have means μ0 and μ1 and the same total within-class scatter ("covariance") matrix,
$$S_W = \sum_{i \in C_0} (\mathbf{x}_i - \mu_0)(\mathbf{x}_i - \mu_0)^T + \sum_{j \in C_1} (\mathbf{x}_j - \mu_1)(\mathbf{x}_j - \mu_1)^T \qquad (5.36)$$
$$= X_c^T X_c, \qquad (5.37)$$
where X_c is the (N × P) matrix of data centered on their respective means:
$$X_c = \begin{bmatrix} X_0 - \mu_0 \\ X_1 - \mu_1 \end{bmatrix},$$
where X_0 and X_1 are the (N_0 × P) and (N_1 × P) matrices of samples of classes C_0 and C_1.
The Fisher criterion to maximize is the ratio of between-class to within-class variance:
$$F_{\text{Fisher}}(w) = \frac{\sigma^2_{\text{between}}}{\sigma^2_{\text{within}}} \qquad (5.38)$$
$$= \frac{(w^T \mu_1 - w^T \mu_0)^2}{w^T X_c^T X_c \, w} \qquad (5.39)$$
$$= \frac{\big(w^T (\mu_1 - \mu_0)\big)^2}{w^T X_c^T X_c \, w} \qquad (5.40)$$
$$= \frac{w^T (\mu_1 - \mu_0)(\mu_1 - \mu_0)^T w}{w^T X_c^T X_c \, w} \qquad (5.41)$$
$$= \frac{w^T S_B \, w}{w^T S_W \, w}. \qquad (5.42)$$
The Fisher most discriminant projection
In the two-class case, the maximum separation occurs by a projection on (μ_1 − μ_0) using the Mahalanobis metric S_W^{-1}, so that
$$w \propto S_W^{-1} (\mu_1 - \mu_0).$$
Demonstration
$$\nabla_w F_{\text{Fisher}}(w) = 0$$
$$\nabla_w \left( \frac{w^T S_B w}{w^T S_W w} \right) = 0$$
$$(w^T S_W w)(2 S_B w) - (w^T S_B w)(2 S_W w) = 0$$
$$(w^T S_W w)(S_B w) = (w^T S_B w)(S_W w)$$
$$S_B w = \frac{w^T S_B w}{w^T S_W w} \, (S_W w)$$
$$S_B w = \lambda (S_W w)$$
$$S_W^{-1} S_B w = \lambda w.$$
Since we do not care about the magnitude of w, only its direction, we replaced the scalar factor (w^T S_B w)/(w^T S_W w) by λ.
In the multiple-class case, the solutions w are determined by the eigenvectors of S_W^{-1} S_B that correspond to the K − 1 largest eigenvalues.
However, in the two-class case (in which S_B = (μ_1 − μ_0)(μ_1 − μ_0)^T) it is easy to show that w = S_W^{-1}(μ_1 − μ_0) is the unique eigenvector of S_W^{-1} S_B:
$$S_W^{-1} (\mu_1 - \mu_0)(\mu_1 - \mu_0)^T w = \lambda w$$
$$S_W^{-1} (\mu_1 - \mu_0)(\mu_1 - \mu_0)^T S_W^{-1} (\mu_1 - \mu_0) = \lambda \, S_W^{-1} (\mu_1 - \mu_0),$$
where here λ = (μ_1 − μ_0)^T S_W^{-1} (μ_1 − μ_0). This leads to the result
$$w \propto S_W^{-1} (\mu_1 - \mu_0).$$
warnings.filterwarnings('ignore')
%matplotlib inline
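Before the LDA example below, here is a minimal NumPy sketch of the Fisher direction derived above, on simulated two-class data (the means, covariance and random seed are my own choices, not the book's):

import numpy as np

np.random.seed(0)
Cov = np.array([[1, .8], [.8, 1]])
X0 = np.random.multivariate_normal([0, 0], Cov, 100)   # class C0
X1 = np.random.multivariate_normal([0, 2], Cov, 100)   # class C1

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Xc = np.vstack([X0 - mu0, X1 - mu1])     # data centered on class means
Sw = Xc.T @ Xc                           # within-class scatter matrix
w = np.linalg.solve(Sw, mu1 - mu0)       # w proportional to Sw^{-1} (mu1 - mu0)
print(w / np.linalg.norm(w))             # Fisher discriminative direction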
Exercise
How many parameters are required to estimate to perform an LDA?

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Dataset
n_samples, n_features = 100, 2
mean0, mean1 = np.array([0, 0]), np.array([0, 2])
Cov = np.array([[1, .8], [.8, 1]])
np.random.seed(42)
X0 = np.random.multivariate_normal(mean0, Cov, n_samples)
X1 = np.random.multivariate_normal(mean1, Cov, n_samples)
X = np.vstack([X0, X1])
y = np.array([0] * X0.shape[0] + [1] * X1.shape[0])

# LDA with scikit-learn (fit/predict calls assumed)
lda = LDA()
y_pred_lda = lda.fit(X, y).predict(X)

errors = y_pred_lda != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y_pred_lda)))
Nb errors=10, error rate=0.05
Logistic regression is a generalized linear model, i.e. a linear model with a link function that maps the output of a linear multiple regression to the posterior probability of class 1, p(1|x), using the logistic sigmoid function:
$$p(1|\mathbf{x}, w) = \frac{1}{1 + \exp(-w \cdot \mathbf{x})}$$
def logistic(x): return 1 / (1 + np.exp(-x))

def logistic_loss(x): return np.log(1 + np.exp(-x))

x = np.linspace(-6, 6, 100)
plt.subplot(121)
plt.plot(x, logistic(x))
plt.grid(True)
plt.title('Logistic (sigmoid)')

x = np.linspace(-3, 3, 100)
plt.subplot(122)
plt.plot(x, logistic_loss(x), label='Logistic loss')
plt.plot(x, np.maximum(0, 1 - x), label='Hinge loss')
plt.legend()
plt.title('Losses')
plt.grid(True)

The loss to minimize is the negative log-likelihood:
$$L(w, X, y) = -\log \mathcal{L}(w, X, y) \qquad (5.43)$$
$$= -\log \prod_i \left\{ p(1|\mathbf{x}_i, w)^{y_i} \, \big(1 - p(1|\mathbf{x}_i, w)\big)^{(1 - y_i)} \right\} \qquad (5.44)$$
$$= -\sum_i \left\{ y_i \log p(1|\mathbf{x}_i, w) + (1 - y_i) \log\big(1 - p(1|\mathbf{x}_i, w)\big) \right\} \qquad (5.45)$$
$$= -\sum_i \left\{ y_i \, (w \cdot \mathbf{x}_i) - \log\big(1 + \exp(w \cdot \mathbf{x}_i)\big) \right\} \qquad (5.46)$$
$$= \sum_i \left\{ \log\big(1 + \exp(w \cdot \mathbf{x}_i)\big) - y_i \, (w \cdot \mathbf{x}_i) \right\} \qquad (5.47)$$
This is solved by a numerical method using the gradient of the loss:
$$\frac{\partial L(w, X, y)}{\partial w} = \sum_i \mathbf{x}_i \, \big( p(1|\mathbf{x}_i, w) - y_i \big)$$
Logistic regression is a discriminative model since it focuses only on the posterior probability of each class p(C_k|x). It only requires estimating the P weights of the w vector. Thus it should be favoured over LDA when there are many input features. In small dimensions and balanced situations it provides similar predictions to LDA. However, imbalanced group sizes cannot be explicitly controlled. This can be managed using a reweighting of the input samples.
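As a complement to the scikit-learn example below, here is a minimal sketch of plain batch gradient descent on this loss, directly implementing the gradient formula above (the toy data and settings are my own, not the book's):

import numpy as np

def logistic_gd(X, y, lr=0.1, n_iter=1000):
    # Minimize the logistic loss by batch gradient descent
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ w))       # p(1|x, w)
        grad = X.T @ (p - y)               # sum_i x_i (p_i - y_i)
        w -= lr * grad / len(y)
    return w

# toy usage
rng = np.random.RandomState(42)
X = rng.randn(100, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(float)
print(logistic_gd(X, y))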
import sklearn.linear_model as lm
logreg = lm.LogisticRegression()  # logistic regression model (settings assumed)

logreg.fit(X, y)
y_pred_logreg = logreg.predict(X)

errors = y_pred_logreg != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y_pred_logreg)))
print(logreg.coef_)
Exercise
Explore the LogisticRegression parameters and propose a solution in the case of a highly imbalanced training dataset N_1 ≫ N_0 when we know that in reality both classes have the same probability p(C_1) = p(C_0).
5.4.4 Overfitting
When the matrix S_W is not full rank or P ≫ N, the Fisher most discriminant projection estimate is not unique. This can be solved using a biased version of S_W:
$$S_W^{\text{Ridge}} = S_W + \lambda I$$
where I is the P × P identity matrix. This leads to the regularized (ridge) estimator of Fisher's linear discriminant analysis:
$$w^{\text{Ridge}} \propto (S_W + \lambda I)^{-1} (\mu_1 - \mu_0)$$
Increasing λ will:
• Shrink the coefficients toward zero.
• Make the covariance converge toward the diagonal matrix, reducing the contribution of the pairwise covariances.
The objective function to be minimized is now the combination of the logistic loss (negative log-likelihood) −log ℒ(w) with a penalty of the L2 norm of the weights vector. In the two-class case, using the 0/1 coding we obtain:
# Dataset
# Build a classification task using 3 informative features
from sklearn import datasets
X, y = datasets.make_classification(n_samples=100,
                                    n_features=20,
                                    n_informative=3,
                                    n_redundant=0,
                                    n_repeated=0,
                                    n_classes=2,
                                    random_state=0,
                                    shuffle=False)

import numpy as np
import matplotlib.pyplot as plt
import sklearn.linear_model as lm
lr = lm.LogisticRegression(penalty='l2')  # ridge (L2) logistic regression; definition assumed

lr.fit(X, y)
y_pred_lr = lr.predict(X)
errors = y_pred_lr != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y)))
print(lr.coef_)
(1, 20)
Diff 0.0
Nb errors=26, error rate=0.26
[[- 0.7579822 -0.01228473 -0.11412421 0.25491221 0.4329847
0.12899737
0.14564739 0.16763962 0.85071394 0.02116803 -0.1611039 -0.0146019
-0.03399884 0.43127728 -0.05831644 -0.0812323 0.15877844 0.29387389
0.54659524 0.03376169]]
The objective function to be minimized is now the combination of the logistic loss −log ℒ(w) with a penalty of the L1 norm of the weights vector. In the two-class case, using the 0/1 coding we obtain:
lrl1 = lm.LogisticRegression(penalty='l1', solver='liblinear')  # lasso (L1) logistic regression; definition assumed

lrl1.fit(X, y)
y_pred_lrl1 = lrl1.predict(X)
errors = y_pred_lrl1 != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y_pred_lrl1)))
print(lrl1.coef_)
Support Vector Machines (SVM) seek a separating hyperplane with maximum margin to enforce robustness against noise. Like logistic regression it is a discriminative method that focuses only on predictions.
Here we present the non-separable case of maximum margin classifiers with ±1 coding (i.e. y_i ∈ {−1, +1}). In the next figure the legend applies to samples of the "dot" class.
Here we introduce the slack variables ξ_i, with ξ_i = 0 for points that are on or inside the correct margin boundary and ξ_i = |y_i − (w · x_i)| for the other points. Thus:
1. If y_i (w · x_i) ≥ 1 then the point lies outside the margin but on the correct side of the decision boundary. In this case ξ_i = 0. The constraint is thus not active for this point. It does not contribute to the prediction.
2. If 1 > y_i (w · x_i) ≥ 0 then the point lies inside the margin and on the correct side of the decision boundary. In this case 0 < ξ_i ≤ 1. The constraint is active for this point. It does contribute to the prediction as a support vector.
3. If y_i (w · x_i) < 0 then the point is on the wrong side of the decision boundary (misclassification). In this case ξ_i > 1. The constraint is active for this point. It does contribute to the prediction as a support vector.
This loss is called the hinge loss, defined as:
$$\max\big(0, 1 - y_i (w \cdot \mathbf{x}_i)\big)$$
So linear SVM is close to ridge logistic regression, using the hinge loss instead of the logistic loss. Both will provide very similar predictions.
from sklearn import svm

svmlin = svm.LinearSVC()
# Remark: by default LinearSVC uses squared_hinge as loss
svmlin.fit(X, y)
y_pred_svmlin = svmlin.predict(X)
errors = y_pred_svmlin != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y_pred_svmlin)))
print(svmlin.coef_)
Linear SVM for classification (also called SVM-C or SVC) with ℓ1-regularization:
$$\min_{w} \; F_{\text{Lasso linear SVM}}(w) = \lambda \|w\|_1 + C \sum_{i=1}^{N} \xi_i$$
$$\text{with} \quad \forall i, \; y_i (w \cdot \mathbf{x}_i) \geq 1 - \xi_i$$
from sklearn import svm

svmlinl1 = svm.LinearSVC(penalty='l1', dual=False)  # L1-regularized linear SVM; definition assumed
svmlinl1.fit(X, y)
y_pred_svmlinl1 = svmlinl1.predict(X)
errors = y_pred_svmlinl1 != y
print("Nb errors=%i, error rate=%.2f" % (errors.sum(), errors.sum() / len(y_pred_svmlinl1)))
print(svmlinl1.coef_)
Exercise
Compare predictions of logistic regression (LR) and their SVM counterparts, i.e. L2 LR vs L2 SVM and L1 LR vs L1 SVM:
• Compute the correlation between pairs of weight vectors.
• Compare the predictions of two classifiers using their decision function:
– Give the equation of the decision function for a linear classifier, assuming that there is no intercept.
The objective function to be minimized is now the combination of the logistic loss −log ℒ(w) or the hinge loss with a combination of L1 and L2 penalties. In the two-class case, using the 0/1 coding we obtain:
$$\min_{w} \; \text{Logistic enet}(w) = -\log \mathcal{L}(w, X, y) + \alpha \left( \rho \|w\|_1 + (1 - \rho) \|w\|_2^2 \right) \qquad (5.48)$$
$$\min_{w} \; \text{Hinge enet}(w) = \text{Hinge loss}(w) + \alpha \left( \rho \|w\|_1 + (1 - \rho) \|w\|_2^2 \right) \qquad (5.49)$$
from sklearn import datasets
from sklearn import linear_model as lm
import matplotlib.pyplot as plt

X, y = datasets.make_classification(n_samples=100,
                                    n_features=20,
                                    n_informative=3,
                                    n_redundant=0,
                                    n_repeated=0,
                                    n_classes=2,
                                    random_state=0,
                                    shuffle=False)

enetloglike = lm.SGDClassifier(loss="log",
                               penalty="elasticnet",
                               alpha=0.0001,
                               l1_ratio=0.15,
                               class_weight='balanced')
enetloglike.fit(X, y)
SGDClassifier(alpha=0.0001, average=False, class_weight='balanced',
              early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
              l1_ratio=0.15, learning_rate='optimal', loss='log', max_iter=1000,
              n_iter_no_change=5, n_jobs=None, penalty='elasticnet', power_t=0.5,
              random_state=None, shuffle=True, tol=0.001,
              validation_fraction=0.1, verbose=0, warm_start=False)

enethinge = lm.SGDClassifier(loss="hinge",
                             penalty="elasticnet",
                             alpha=0.0001,
                             l1_ratio=0.15,
                             class_weight='balanced')
enethinge.fit(X, y)
Exercise
source: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
Imagine a study evaluating a new test that screens people for a disease. Each person taking
the test either has or does not have the disease. The test outcome can be positive
(classifying the person as having the disease) or negative (classifying the person as not
having the disease). The test results for each subject may or may not match the subject’s
actual status. In that setting:
• True positive (TP): Sick people correctly identified as sick
• False positive (FP): Healthy people incorrectly identified as sick
• True negative (TN): Healthy people correctly identified as healthy
• False negative (FN): Sick people incorrectly identified as healthy
• Accuracy (ACC): ACC = (TP + TN) / (TP + FP + FN + TN)
• Sensitivity (SEN), or recall of the positive class, or true positive rate (TPR), or hit rate: SEN = TP / P = TP / (TP + FN)
• Specificity (SPC), or recall of the negative class, or true negative rate: SPC = TN / N = TN / (TN + FP)
• Precision or positive predictive value (PPV): PPV = TP / (TP + FP)
• Balanced accuracy (bACC): a useful performance measure that avoids inflated performance estimates on imbalanced datasets (Brodersen, et al. (2010). "The balanced accuracy and its posterior distribution"). It is defined as the arithmetic mean of sensitivity and specificity, or the average accuracy obtained on either class: bACC = 1/2 (SEN + SPC)
• F1 score (or F-score), a weighted average of precision and recall, is useful to deal with imbalanced datasets (these measures are illustrated in the short scikit-learn sketch below).
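A short illustration of these measures with scikit-learn on an arbitrary toy prediction (the values are mine, not from the book):

from sklearn import metrics

y_true = [0, 1, 0, 1, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
print(metrics.accuracy_score(y_true, y_pred))                # ACC
print(metrics.recall_score(y_true, y_pred, pos_label=1))     # SEN (recall of class 1)
print(metrics.recall_score(y_true, y_pred, pos_label=0))     # SPC (recall of class 0)
print(metrics.precision_score(y_true, y_pred))               # PPV
print(metrics.balanced_accuracy_score(y_true, y_pred))       # bACC
print(metrics.f1_score(y_true, y_pred))                      # F1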
The four outcomes can be formulated in a 2×2 contingency table or confusion matrix: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
For more precision see: http://scikit-learn.org/stable/modules/model_evaluation.html

from sklearn import metrics
y_pred = [0, 1, 0, 0]
y_true = [0, 1, 0, 1]
metrics.accuracy_score(y_true, y_pred)
# Balanced accuracy
b_acc = recalls.mean()
import scipy.stats

acc, N = 0.65, 70
pval = scipy.stats.binom_test(x=int(acc * N), n=N, p=0.5) / 2
print(pval)
0.01123144774625465
Some classifiers may have found a good discriminative projection w. However, if the threshold used to decide the final predicted class is poorly adjusted, the performance will show a high specificity and a low sensitivity, or the contrary.
In this case it is recommended to use the AUC of a ROC analysis, which basically provides a measure of the overlap of the two classes when points are projected on the discriminative axis. For more detail on ROC and AUC see: https://en.wikipedia.org/wiki/Receiver_operating_characteristic.
from sklearn import metrics
score_pred = np.array([.1, .2, .3, .4, .5, .6, .7, .8])
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
thres = .9
y_pred = (score_pred > thres).astype(int)
print("Predictions:", y_pred)
metrics.accuracy_score(y_true, y_pred)
Predictions: [0 0 0 0 0 0 0 0]
Recalls: [1. 0.]
AUC: 1.0
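The Recalls and AUC outputs above come from code that was lost in this draft; a minimal sketch of how they could be obtained, continuing the snippet above:

# Per-class recalls (recall of class 0 = specificity, recall of class 1 = sensitivity)
recalls = metrics.recall_score(y_true, y_pred, average=None)
print("Recalls:", recalls)

# ROC analysis uses the continuous scores, not the thresholded predictions
print("AUC:", metrics.roc_auc_score(y_true, score_pred))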
5.4.13 Exercise
SVMs are kernel-based methods, which require only a user-specified kernel function K(x_i, x_j), i.e. a similarity function over pairs of data points (x_i, x_j), mapping them into a kernel (dual) space in which learning algorithms operate linearly, i.e. every operation on points is a linear combination of K(x_i, x_j).
Outline of the SVM algorithm:
1. Map points x into kernel space using a kernel function: x → K(x, .).
2. Learning algorithms operate linearly by dot product in the high-dimensional kernel space: K(., x_i) · K(., x_j).
• Using the kernel trick (Mercer's theorem), replace the dot product in the high-dimensional space by a simpler operation such that K(., x_i) · K(., x_j) = K(x_i, x_j). Thus we only need to compute a similarity measure for each pair of points and store it in an N × N Gram matrix.
• Finally, the learning process consists of estimating the α_i of the decision function that maximises the hinge loss (of f(x)) plus some penalty when applied on all training points:
$$f(\mathbf{x}) = \text{sign}\left( \sum_i^N \alpha_i \, y_i \, K(\mathbf{x}_i, \mathbf{x}) \right).$$
3. Predict a new point x using the decision function.
One of the most commonly used kernels is the Radial Basis Function (RBF) kernel. For a pair of points x_i, x_j the RBF kernel is defined as:
$$K(\mathbf{x}_i, \mathbf{x}_j) = \exp\left( -\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma^2} \right) \qquad (5.50)$$
$$= \exp\left( -\gamma \, \|\mathbf{x}_i - \mathbf{x}_j\|^2 \right) \qquad (5.51)$$
where σ (or γ) defines the kernel width parameter. Basically, we consider a Gaussian function centered on each training sample x_i. It has a ready interpretation as a similarity measure, as it decreases with the squared Euclidean distance between the two feature vectors.
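A small NumPy check of this definition (illustrative; the gamma value is arbitrary):

import numpy as np

def rbf_kernel(xi, xj, gamma=1.0):
    # K(xi, xj) = exp(-gamma * ||xi - xj||^2)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

print(rbf_kernel(np.array([0., 0.]), np.array([1., 0.])))  # closer pair, larger similarity
print(rbf_kernel(np.array([0., 0.]), np.array([3., 0.])))  # farther pair, smaller similarity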
import numpy as np
from sklearn.svm import SVC
from sklearn import datasets
import matplotlib.pyplot as plt

# dataset
X, y = datasets.make_classification(n_samples=10, n_features=2, n_redundant=0,
                                    n_classes=2, random_state=1, shuffle=False)

clf = SVC(kernel='rbf')  # , gamma=1
clf.fit(X, y)
print("#Errors: %i" % np.sum(y != clf.predict(X)))

clf.decision_function(X)

# Useful internals:
# Array of support vectors
clf.support_vectors_
# Indices of support vectors within original X
np.all(X[clf.support_, :] == clf.support_vectors_)
3. Random forest
A random forest is a meta-estimator that fits a number of decision tree learners on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
Random forest models reduce the risk of overfitting by introducing randomness by:
• building multiple trees (n_estimators)
• drawing observations with replacement (i.e., a bootstrapped sample)
• splitting nodes on the best split among a random subset of the features selected at
every node
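The code producing the output just below was lost in this draft; a minimal sketch, assuming a RandomForestClassifier fitted on the classification dataset X, y used earlier:

from sklearn.ensemble import RandomForestClassifier

forest = RandomForestClassifier(n_estimators=100)
forest.fit(X, y)
print("#Errors: %i" % np.sum(y != forest.predict(X)))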
#Errors: 0
Extra Trees are like Random Forests, in that they build multiple trees and split nodes using random subsets of features, but with two key differences: they do not bootstrap observations (meaning they sample without replacement), and nodes are split on random splits, not best splits. In summary, ExtraTrees:
• builds multiple trees with bootstrap = False by default, which means it samples without replacement;
• splits nodes based on random splits among a random subset of the features selected at every node.
In Extra Trees, randomness doesn't come from bootstrapping the data, but rather from the random splits of all observations. ExtraTrees is named for Extremely Randomized Trees. A minimal usage sketch is given below.
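A minimal usage sketch (same assumptions as the random forest example above):

from sklearn.ensemble import ExtraTreesClassifier

extra_trees = ExtraTreesClassifier(n_estimators=100)
extra_trees.fit(X, y)
print("#Errors: %i" % np.sum(y != extra_trees.predict(X)))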
warnings.filterwarnings('ignore')
%matplotlib inline
1. Training dataset: Dataset used to fit the model (set the model parameters like
weights). The training error can be easily calculated by applying the statistical
learning method to the observations used in its training. But because of overfitting,
the training error rate can dramatically underestimate the error that would be
obtained on new samples.
The cross-validation scheme randomly divides the set of observations into K groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method f(·) is fitted on the remaining union of K − 1 folds: f(X^{−K}, y^{−K}).
The measure of performance (the score function S), either an error measure or a correct-prediction measure, is an average of a loss or correct-prediction measure, noted ℒ, between a true target value and the predicted target value. The score function is evaluated on the observations in the held-out fold. For each sample i we consider the model estimated on the dataset without the group k that contains i, noted f(X^{−k(i)}, y^{−k(i)}). This procedure is repeated K times; each time, a different group of observations is treated as a test set. Then we compare the predicted value f^{−k(i)}(x_i) = ŷ_i with the true value y_i using an error or loss function ℒ(y, ŷ).
For 10-fold CV we can either average over the 10 values (macro measure) or concatenate the 10 experiments and compute the micro measure.
Two strategies, micro vs macro estimates:
Micro measure, average(individual scores): compute a score S for each sample and average over all samples. It is similar to an average score computed over all concatenated samples:
$$S(f) = \frac{1}{N} \sum_{i}^{N} \mathcal{L}\left( y_i, f(\mathbf{x}^{-k(i)}, y^{-k(i)}) \right).$$
Macro measure, mean(CV scores) (the most commonly used method): compute a score S on each fold k and average across folds:
$$S(f) = \frac{1}{K} \sum_{k}^{K} S_k(f)$$
$$S(f) = \frac{1}{K} \sum_{k}^{K} \frac{1}{N_k} \sum_{i \in k} \mathcal{L}\left( y_i, f(\mathbf{x}^{-k(i)}, y^{-k(i)}) \right).$$
These two measures (an average of averages vs. a global average) are generally similar. They may differ slightly if folds are of different sizes.
This validation scheme is known as K-fold CV. Typical choices of K are 5 or 10 [Kohavi 1995]. The extreme case where K = N is known as leave-one-out cross-validation, LOO-CV.
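A minimal sketch contrasting the macro estimate (mean of fold scores) with the micro estimate (score on concatenated predictions); the dataset, classifier and seed are my own choices:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn import datasets, metrics

X, y = datasets.make_classification(n_samples=100, n_features=10, random_state=42)
y_pred = np.zeros_like(y)
fold_scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=42).split(X):
    clf = LogisticRegression().fit(X[train], y[train])
    y_pred[test] = clf.predict(X[test])
    fold_scores.append(metrics.accuracy_score(y[test], y_pred[test]))

print("Macro:", np.mean(fold_scores))                 # mean of per-fold accuracies
print("Micro:", metrics.accuracy_score(y, y_pred))    # accuracy on concatenated predictions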
CV for regression
Usually the error function ℒ() is the r-squared score. However, other functions could be used.

%matplotlib inline
import warnings
warnings.filterwarnings(action='once')
import numpy as np
from sklearn import datasets
import sklearn.linear_model as lm
import sklearn.metrics as metrics
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

X, y = datasets.make_regression(n_samples=100, n_features=100,
                                n_informative=10, random_state=42)
estimator = lm.Ridge(alpha=10)
r2_train, r2_test = list(), list()

# provide a cv
cv = KFold(n_splits=5, random_state=42)
scores = cross_val_score(estimator=estimator, X=X, y=y, cv=cv)
print("Test r2:%.2f" % scores.mean())
Test r2:0.73
Test r2:0.73
CV for classification
With classification problems it is essential to sample folds where each set contains approximately the same percentage of samples of each target class as the complete set. This is called stratification. In this case, we will use StratifiedKFold, which is a variation of k-fold that returns stratified folds.
Usually the error functions ℒ() are, at least, the sensitivity and the specificity. However, other functions could be used.
import numpy as np
from sklearn import datasets
import sklearn.linear_model as lm
import sklearn.metrics as metrics
from sklearn.model_selection import StratifiedKFold

X, y = datasets.make_classification(n_samples=100, n_features=100,
                                    n_informative=10, random_state=42)

cv = StratifiedKFold(n_splits=5)

acc_test.append(metrics.accuracy_score(y[test], estimator.predict(X[test, :])))
== Macro measures ==
Train SPC:1.00; SEN:1.00
Test SPC:0.78; SEN:0.82
Test ACC:0.80, ballanced ACC:0.80
Folds: [0.9, 0.7, 0.95, 0.7, 0.75]
Test ACC:0.80 Folds: [0.9, 0.7, 0.95, 0.7, 0.75]
== Micro measures ==
Test SPC:0.78; SEN:0.82
Test ACC:0.80

Scikit-learn provides a user-friendly function to perform CV:

from sklearn.model_selection import cross_val_score
Note that with the Scikit-learn user-friendly function we average the scores obtained on individual folds, which may provide slightly different results than the overall average presented earlier.
Dataset

import numpy as np
from sklearn import datasets
import sklearn.linear_model as lm
import sklearn.metrics as metrics
from sklearn.model_selection import StratifiedKFold

X, y = datasets.make_classification(n_samples=20, n_features=5,
                                    n_informative=2, random_state=42)
cv = StratifiedKFold(n_splits=5)

Use cross_validate function

print(np.mean(test_accs), test_accs)
coefs_cv = np.array(coefs_seq)
print(coefs_cv)
print(coefs_cv.mean(axis=0))
print("Std Err of the coef")
print(coefs_cv.std(axis=0) / np.sqrt(coefs_cv.shape[0]))

0.8 [0.5, 0.5, 1.0, 1.0, 1.0]
[[[-0.87692513 0.6260013 1.18714373 -0.30685978 -0.38037393]]
parallel = Parallel(n_jobs=5)
cv_ret = parallel(
    delayed(_split_fit_predict)(
        clone(estimator), X, y, train, test)
    for train, test in cv.split(X, y))
y_test_pred_cv, coefs_cv = zip(*cv_ret)

test_accs = [metrics.accuracy_score(y[test], y_test_pred[test])
             for train, test in cv.split(X, y)]
print(np.mean(test_accs), test_accs)

0.8 [0.5, 0.5, 1.0, 1.0, 1.0]

Test same predictions and same coefficients

assert np.all(y_test_pred == y_test_pred_seq)
assert np.allclose(np.array(coefs_cv).squeeze(), np.array(coefs_seq).squeeze())
2. Evaluate the model on the validation set and keep the parameter(s) that minimise the error measure:
$$\alpha^* = \arg\min_{\alpha} \; L\big(f(X_{\text{train}}; \alpha), y_{\text{val}}\big)$$
3. Refit the learner on all training + validation data using the best hyper-parameters:
$$f^* \equiv f\big(X_{\text{train} \cup \text{val}}, \, y_{\text{train} \cup \text{val}}, \, \alpha^*\big)$$
Most of the time, we cannot afford such a three-way split. Thus, again we will use CV, but in this case we need two nested CVs.
One outer CV loop, for model assessment. This CV performs K splits of the dataset into a training plus validation set (X^{−K}, y^{−K}) and a test set (X^K, y^K).
One inner CV loop, for model selection. For each run of the outer loop, the inner loop performs L splits of the dataset (X^{−K}, y^{−K}) into a training set (X^{−K,−L}, y^{−K,−L}) and a validation set (X^{−K,L}, y^{−K,L}).
Note that the inner CV loop combined with the learner forms a new learner with an automatic model (parameter) selection procedure. This new learner can be easily constructed using Scikit-learn: the learner is wrapped inside a GridSearchCV class.
Then the new learner can be plugged into the classical outer CV loop.
import numpy as np
from sklearn import datasets
import sklearn.linear_model as lm
from sklearn.model_selection import GridSearchCV
import sklearn.metrics as metrics
from sklearn.model_selection import KFold

# Dataset
noise_sd = 10
X, y, coef = datasets.make_regression(n_samples=50, n_features=100,
                                      noise=noise_sd,
                                      n_informative=2,  # remaining arguments assumed
                                      random_state=42, coef=True)
SNR: 2.6358469446381614
Sklearn will automatically select a grid of parameters; most of the time the default values are used.
n_jobs is the number of CPUs to use during the cross-validation. If -1, all CPUs are used.
1) Biased usage: fit on all data, omit the outer CV loop

model.fit(X, y)
print("Train r2:%.2f" % metrics.r2_score(y, model.predict(X)))
print(model.best_params_)

Train r2:0.96
{'alpha': 1.0, 'l1_ratio': 0.9}
r2_test.append(metrics.r2_score(y_test, model.predict(X_test)))
r2_train.append(metrics.r2_score(y_train, model.predict(X_train)))
alphas.append(model.best_params_)

print("Train r2:%.2f" % np.mean(r2_train))
print("Test r2:%.2f" % np.mean(r2_test))
print("Selected alphas:", alphas)
Train r2:1.00
Test r2:0.55
Selected alphas: [{'alpha': 0.001, 'l1_ratio': 0.9}, {'alpha': 0.001, 'l1_ratio': 0.9},
{'alpha': 0.001, 'l1_ratio': 0.9}, {'alpha': 0.01, 'l1_ratio': 0.9},
{'alpha': 0.001, 'l1_ratio': 0.9}]

3) User-friendly sklearn for outer CV

from sklearn.model_selection import cross_val_score
scores = cross_val_score(estimator=model, X=X, y=y, cv=cv)
print("Test r2:%.2f" % scores.mean())

Test r2:0.55

from sklearn import datasets
import sklearn.linear_model as lm
import sklearn.metrics as metrics
from sklearn.model_selection import cross_val_score
X, y = datasets.make_classification(n_samples=100, n_features=100,
                                    n_informative=10, random_state=42)
5. Random Permutations
The statistic is the correlation.

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#%matplotlib qt

np.random.seed(42)
x = np.random.normal(loc=10, scale=1, size=100)
y = x + np.random.normal(loc=-3, scale=3, size=100)  # snr = 1/2
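The loop that builds the `perms` array used by the plotting code below was lost in this draft. A minimal sketch, assuming the observed correlation is stored first and the permuted correlations after it:

# Permutation test on the correlation: perms[0] is the observed statistic,
# the remaining entries are correlations obtained after permuting y
nperm = 10000
perms = np.zeros(nperm + 1)
perms[0] = np.corrcoef(x, y)[0, 1]
for i in range(1, nperm + 1):
    perms[i] = np.corrcoef(x, np.random.permutation(y))[0, 1]
pval_perm = np.sum(perms >= perms[0]) / perms.shape[0]
print("P-value (permutation):", pval_perm)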
# Plot
# Re-weight to obtain distribution
weights = np.ones(perms.shape[0]) / perms.shape[0]
plt.hist([perms[perms >= perms[0]], perms], histtype='stepfilled',
         bins=100, label=["t>t obs (p-value)", "t<t obs"],
         weights=[weights[perms >= perms[0]], weights])
Exercise
Given the logistic regression presented above and its validation using a 5-fold CV:
1. Compute the p-value associated with the prediction accuracy using a permutation
test.
2. Compute the p-value associated with the prediction accuracy using a parametric
test.
6. Bootstrapping
2. For each sample i, fit the model and compute the scores.
3. Assess standard errors and confidence intervals of the scores using the scores obtained on the B resampled datasets.
import numpy as np
from sklearn import datasets
import sklearn.linear_model as lm
import sklearn.metrics as metrics
import pandas as pd

# Regression dataset
n_features = 5
n_features_info = 2
n_samples = 100
X = np.random.randn(n_samples, n_features)
beta = np.zeros(n_features)
beta[:n_features_info] = 1
Xbeta = np.dot(X, beta)
eps = np.random.randn(n_samples)
y = Xbeta + eps

model = lm.LinearRegression()  # model definition assumed

# Bootstrap loop
nboot = 100  # !! Should be at least 1000
scores_names = ["r2"]
scores_boot = np.zeros((nboot, len(scores_names)))
coefs_boot = np.zeros((nboot, X.shape[1]))

orig_all = np.arange(X.shape[0])
for boot_i in range(nboot):
    boot_tr = np.random.choice(orig_all, size=len(orig_all), replace=True)
    boot_te = np.setdiff1d(orig_all, boot_tr, assume_unique=False)
    Xtr, ytr = X[boot_tr, :], y[boot_tr]
    Xte, yte = X[boot_te, :], y[boot_te]
    model.fit(Xtr, ytr)
    y_pred = model.predict(Xte).ravel()
    scores_boot[boot_i, :] = metrics.r2_score(yte, y_pred)
    coefs_boot[boot_i, :] = model.coef_

coefs_boot = pd.DataFrame(coefs_boot)
coefs_stat = coefs_boot.describe(percentiles=[.975, .5, .025])
print("Coefficients distribution")
print(coefs_stat)
Coefficients on all data:
[ 1.0257263   1.11323    -0.0499828  -0.09263008  0.15267576]
r-squared: Mean=0.61, SE=0.10, CI=(nan nan)
Coefficients distribution
                0           1           2           3           4
count  100.000000  100.000000  100.000000  100.000000  100.000000
mean     1.012534    1.132775   -0.056369   -0.100046    0.164236
std      0.094269    0.104934    0.111308    0.095098    0.095656
min      0.759189    0.836394   -0.290386   -0.318755   -0.092498
2.5%     0.814260    0.948158   -0.228483   -0.268790   -0.044067
50%      1.013097    1.125304   -0.057039   -0.099281    0.164194
97.5%    1.170183    1.320637    0.158680    0.085064    0.331809
max      1.237874    1.340585    0.291111    0.151059    0.450812
These methods are Ensemble learning techniques. These models are machine learning
paradigms where multiple models (often called “weak learners”) are trained to solve the
same problem and combined to get better results. The main hypothesis is that when
weak models are correctly combined we can obtain more accurate and/or robust models.
2. Bootstrapping
5.7.3 Bagging
In parallel methods we fit the different considered learners independently from each other, so it is possible to train them concurrently. The most famous such approach is "bagging" (standing for "bootstrap aggregating"), which aims at producing an ensemble model that is more robust than the individual models composing it.
When training a model, no matter whether we are dealing with a classification or a regression problem, we obtain a function that takes an input, returns an output, and is defined with respect to the training dataset.
The idea of bagging is then simple: we want to fit several independent models and "average" their predictions in order to obtain a model with a lower variance. However, we can't, in practice, fit fully independent models because it would require too much data. So, we rely on the good "approximate properties" of bootstrap samples (representativity and independence) to fit models that are almost independent.
First, we create multiple bootstrap samples so that each new bootstrap sample will act as another (almost) independent dataset drawn from the true distribution. Then, we can fit a weak learner for each of these samples and finally aggregate them such that we kind of "average" their outputs and, so, obtain an ensemble model with less variance than its components. Roughly speaking, as the bootstrap samples are approximately independent and identically distributed (i.i.d.), so are the learned base models. Then, "averaging" the weak learners' outputs does not change the expected answer but reduces its variance.
So, assuming that we have L bootstrap samples (approximations of L independent datasets) of size B, we can fit L almost independent weak learners (one on each dataset) and then aggregate them into some kind of averaging process in order to get an ensemble model with a lower variance. For example, we can define our strong model such that
There are several possible ways to aggregate the multiple models fitted in parallel.
• For a regression problem, the outputs of individual models can literally be averaged to obtain the output of the ensemble model.
• For a classification problem the class outputted by each model can be seen as a vote, and the class that receives the majority of the votes is returned by the ensemble model (this is called hard-voting). Still for a classification problem, we can also consider the probabilities of each class returned by all the models, average these probabilities and keep the class with the highest average probability (this is called soft-voting).
-> Averages or votes can either be simple or weighted if any relevant weights can be used.
Finally, we can mention that one of the big advantages of bagging is that it can be parallelised. As the different models are fitted independently from each other, intensive parallelisation techniques can be used if required.
Bagging consists in fitting several base models on different bootstrap samples and building an ensemble model that "averages" the results of these weak learners.
Question: can you name an algorithm based on the bagging technique? Hint: leaf.
Examples
Here, we try an example of bagging:
• Bagged Decision Trees for Classification
import pandas
from sklearn import model_selection
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv("https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv", names=names)
array = dataframe.values
x = array[:, 0:8]
y = array[:, 8]
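A minimal sketch of the evaluation the sentence below refers to (the exact estimator settings and CV scheme are assumptions on my part):

seed = 7
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
cart = DecisionTreeClassifier()
model = BaggingClassifier(base_estimator=cart, n_estimators=100, random_state=seed)
results = model_selection.cross_val_score(model, x, y, cv=kfold)
print("Accuracy: %0.2f (+/- %0.2f)" % (results.mean(), results.std()))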
Both of these algorithms will print Accuracy: 0.77 (+/- 0.07). They are equivalent.
5.7.4 Boosting
In sequential methods the different combined weak models are no longer fitted independently from each other. The idea is to fit models iteratively such that the training of the model at a given step depends on the models fitted at the previous steps. "Boosting" is the most famous of these approaches and it produces an ensemble model that is in general less biased than the weak learners that compose it.
Boosting methods work in the same spirit as bagging methods: we build a family of models that are aggregated to obtain a strong learner that performs better.
However, unlike bagging, which mainly aims at reducing variance, boosting is a technique that consists in fitting sequentially multiple weak learners in a very adaptive way: each model in the sequence is fitted giving more importance to observations in the dataset that were badly handled by the previous models in the sequence. Intuitively, each new model focuses its efforts on the most difficult observations to fit up to now, so that we obtain, at the end of the process, a strong learner with lower bias (even if we can notice that boosting can also have the effect of reducing variance).
-> Boosting, like bagging, can be used for regression as well as for classification problems.
Being mainly focused on reducing bias, the base models that are often considered for boosting are models with low variance but high bias. For example, if we want to use trees as our base models, we will choose most of the time shallow decision trees with only a few depths.
Another important reason that motivates the use of low-variance but high-bias models as weak learners for boosting is that these models are in general less computationally expensive to fit (few degrees of freedom when parametrised). Indeed, as computations to fit the different models can't be done in parallel (unlike bagging), it could become too expensive to fit sequentially several complex models.
Once the weak learners have been chosen, we still need to define how they will be sequentially fitted and how they will be aggregated. We will discuss these questions in the two following subsections, describing more especially two important boosting algorithms: adaboost and gradient boosting.
In a nutshell, these two meta-algorithms differ in how they create and aggregate the weak learners during the sequential process. Adaptive boosting updates the weights attached to each of the training dataset observations whereas gradient boosting updates the values of these observations. This main difference comes from the way both methods try to solve the optimisation problem of finding the best model that can be written as a weighted sum of weak learners.
Boosting consists in, iteratively, fitting a weak learner, aggregating it to the ensemble model and "updating" the training dataset to better take into account the strengths and weaknesses of the current ensemble model when fitting the next base model.
1/ Adaptive boosting
In adaptive boosting (often called "adaboost"), we try to define our ensemble model as a weighted sum of L weak learners.
Finding the best ensemble model with this form is a difficult optimisation problem. Then, instead of trying to solve it in one single shot (finding all the coefficients and weak learners that give the best overall additive model), we make use of an iterative optimisation process that is much more tractable, even if it can lead to a sub-optimal solution. More especially, we add the weak learners one by one, looking at each iteration for the best possible pair (coefficient, weak learner) to add to the current ensemble model. In other words, we define recurrently the (s_l)'s such that
where c_l and w_l are chosen such that s_l is the model that best fits the training data and, so, is the best possible improvement over s_(l-1). We can then denote
where E(.) is the fitting error of the given model and e(.,.) is the loss/error function. Thus, instead of optimising "globally" over all the L models in the sum, we approximate the optimum by optimising "locally", building and adding the weak learners to the strong model one by one.
More especially, when considering a binary classification, we can show that the adaboost algorithm can be re-written into a process that proceeds as follows. First, it updates the observation weights in the dataset and trains a new weak learner with a special focus given to the observations misclassified by the current ensemble model. Second, it adds the weak learner to the weighted sum according to an update coefficient that expresses the performance of this weak model: the better a weak learner performs, the more it contributes to the strong learner.
So, assume that we are facing a binary classification problem, with N observations in our dataset, and we want to use the adaboost algorithm with a given family of weak models. At the very beginning of the algorithm (first model of the sequence), all the observations have the same weights 1/N. Then, we repeat L times (for the L learners in the sequence) the following steps:
• fit the best possible weak model with the current observation weights;
• compute the value of the update coefficient, which is some kind of scalar evaluation metric of the weak learner that indicates how much this weak learner should be taken into account in the ensemble model;
• update the strong learner by adding the new weak learner multiplied by its update coefficient;
• compute new observation weights that express which observations we would like to focus on.
Adaboost updates the weights of the observations at each iteration. Weights of well-classified observations decrease relative to weights of misclassified observations. Models that perform better have higher weights in the final ensemble model.
2/ Gradient boosting
In gradient boosting, the ensemble model we try to build is also a weighted sum of weak learners.
Just as we mentioned for adaboost, finding the optimal model under this form is too difficult and an iterative approach is required. The main difference with adaptive boosting is in the definition of the sequential optimisation process. Indeed, gradient boosting casts the problem into a gradient descent one: at each iteration we fit a weak learner to the opposite of the gradient of the current fitting error with respect to the current ensemble model. Let's try to clarify this last point. First, the theoretical gradient descent process over the ensemble model can be written
where E(.) is the fitting error of the given model, c_l is a coefficient corresponding to the step size and
This entity is the opposite of the gradient of the fitting error with respect to the ensemble model at step l-1. This opposite of the gradient is a function that can, in practice, only be evaluated for observations in the training dataset (for which we know inputs and outputs): these evaluations are called pseudo-residuals attached to each observation. Moreover, even if we know for the observations the values of these pseudo-residuals, we don't want to add to our ensemble model any kind of function: we only want to add a new instance of the weak model. So, the natural thing to do is to fit a weak learner to the pseudo-residuals computed for each observation. Finally, the coefficient c_l is computed following a one-dimensional optimisation process (line-search to obtain the best step size c_l).
So, assume that we want to use the gradient boosting technique with a given family of weak models. At the very beginning of the algorithm (first model of the sequence), the pseudo-residuals are set equal to the observation values. Then, we repeat L times (for the L models of the sequence) the following steps:
• fit the best possible weak model to the pseudo-residuals (approximate the opposite of the gradient with respect to the current strong learner);
• compute the value of the optimal step size that defines by how much we update the ensemble model in the direction of the new weak learner;
• update the ensemble model by adding the new weak learner multiplied by the step size (make a step of gradient descent);
• compute new pseudo-residuals that indicate, for each observation, in which direction we would like to update next the ensemble model predictions.
Repeating these steps, we have then built our L models sequentially and aggregated them following a gradient descent approach. Notice that, while adaptive boosting tries to solve at each iteration exactly the "local" optimisation problem (find the best weak learner and its coefficient to add to the strong model), gradient boosting uses instead a gradient descent approach and can more easily be adapted to a large number of loss functions. Thus, gradient boosting can be considered as a generalization of adaboost to arbitrary differentiable loss functions.
Note: there is an algorithm which gained huge popularity after Kaggle competitions. It is XGBoost (Extreme Gradient Boosting). This is a gradient boosting algorithm which has more flexibility (varying number of terminal nodes and leaf weights) in its parameters to avoid sub-learner correlations. Having these important qualities, XGBoost is one of the most used algorithms in data science. LightGBM is a recent implementation of this algorithm. It was published by Microsoft and it gives the same scores (if parameters are equivalent) but runs quicker than a classic XGBoost.
Gradient boosting updates the values of the observations at each iteration. Weak learners are trained to fit the pseudo-residuals that indicate in which direction to correct the current ensemble model predictions to lower the error.
Examples
Here, we try an example of boosting and compare it to bagging. Both algorithms use the same weak learners to build the macro-model.
• Adaboost classifier
breast_cancer = load_breast_cancer()
x = pd.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
y = pd.Categorical.from_codes(breast_cancer.target, breast_cancer.target_names)

# Transforming string Target to an int
encoder = LabelEncoder()
binary_encoded_y = pd.Series(encoder.fit_transform(y))
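A minimal sketch of the AdaBoost fit this example announces (the train/test split and hyper-parameters are assumptions, not the book's exact values):

from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

train_x, test_x, train_y, test_y = train_test_split(x, binary_encoded_y, random_state=2020)
clf_ada_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=200)
clf_ada_boost.fit(train_x, train_y)
print("F1:", f1_score(test_y, clf_ada_boost.predict(test_x)))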
5. Overview of stacking
Stacking mainly differs from bagging and boosting on two points. First, stacking often considers heterogeneous weak learners (different learning algorithms are combined) whereas bagging and boosting consider mainly homogeneous weak learners. Second, stacking learns to combine the base models using a meta-model whereas bagging and boosting combine weak learners following deterministic algorithms.
As we already mentioned, the idea of stacking is to learn several different weak learners and combine them by training a meta-model to output predictions based on the multiple predictions returned by these weak models. So, we need to define two things in order to build our stacking model: the L learners we want to fit and the meta-model that combines them.
For example, for a classification problem, we can choose as weak learners a KNN classifier, a logistic regression and an SVM, and decide to learn a neural network as the meta-model. Then, the neural network will take as inputs the outputs of our three weak learners and will learn to return final predictions based on them.
So, assume that we want to fit a stacking ensemble composed of L weak learners. Then we have to follow the steps thereafter:
• split the training data in two folds
• choose L weak learners and fit them to data of the first fold
• for each of the L weak learners, make predictions for observations in the second fold
• fit the meta-model on the second fold, using predictions made by the weak learners as inputs
In the previous steps, we split the dataset in two folds because predictions on data that have been used for the training of the weak learners are not relevant for the training of the meta-model.
Multi-level stacking considers several layers of stacking: some meta-models are trained on outputs returned by lower-layer meta-models, and so on. Here we have represented a 3-layer stacking model.
Examples
Here, we try an example of stacking and compare it to bagging and boosting. We note that many other applications (datasets) would show more difference between these techniques.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

breast_cancer = load_breast_cancer()
x = pd.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
y = pd.Categorical.from_codes(breast_cancer.target, breast_cancer.target_names)

# Transforming string Target to an int
encoder = LabelEncoder()
binary_encoded_y = pd.Series(encoder.fit_transform(y))

# Train/test split (call reconstructed from the surviving random_state=2020 fragment)
train_x, test_x, train_y, test_y = train_test_split(x, binary_encoded_y, random_state=2020)
clf_logistic_reg = LogisticRegression(solver='liblinear', random_state=2020)


class Stacking:
    '''
    This is a test class for stacking!
    Please feel free to change it to fit your needs.
    We suppose that at least the first N-1 classifiers have a
    predict_proba function.
    '''

    def __init__(self, classifiers):
        # class header and constructor assumed; not visible in this draft
        self._classifiers = classifiers

    def fit(self, data_x, data_y):
        stacked_data_x = data_x.copy()
        for classfier in self._classifiers[:-1]:
            classfier.fit(data_x, data_y)
            stacked_data_x = np.column_stack((stacked_data_x,
                                              classfier.predict_proba(data_x)))
        last_classifier = self._classifiers[-1]
        last_classifier.fit(stacked_data_x, data_y)

    def predict(self, data_x):
        # Rebuild the stacked features from the first N-1 (already fitted) classifiers
        stacked_data_x = data_x.copy()
        for classfier in self._classifiers[:-1]:
            stacked_data_x = np.column_stack((stacked_data_x,
                                              classfier.predict_proba(data_x)))
        last_classifier = self._classifiers[-1]
        return last_classifier.predict(stacked_data_x)
bagging_clf_rf.fit(train_x, train_y)
boosting_clf_ada_boost.fit(train_x, train_y)

classifers_list = [clf_rf, clf_ada_boost, clf_logistic_reg]
clf_stacking = Stacking(classifers_list)
clf_stacking.fit(train_x, train_y)

predictions_bagging = bagging_clf_rf.predict(test_x)
predictions_boosting = boosting_clf_ada_boost.predict(test_x)
predictions_stacking = clf_stacking.predict(test_x)
8. Gradient descent
1. Introduction
Consider the 3-dimensional graph below in the context of a cost function. Our goal is to move from the mountain in the top right corner (high cost) to the dark blue sea in the bottom left (low cost). The arrows represent the direction of steepest descent (negative gradient) from any given point, i.e. the direction that decreases the cost function as quickly as possible.
Starting at the top of the mountain, we take our first step downhill in the direction specified by the negative gradient. Next we recalculate the negative gradient (passing in the coordinates of our new point) and take another step in the direction it specifies. We continue this process iteratively until we get to the bottom of our graph, or to a point where we can no longer move downhill, i.e. a local minimum.
Learning rate
The size of these steps is called the learning rate. With a high learning rate we can cover
more ground each step, but we risk overshooting the lowest point since the slope of the hill
is constantly changing. With a very low learning rate, we can confidently move in the
direction of the negative gradient since we are recalculating it so frequently. A low
learning rate is more precise, but calculating the gradient is time-consuming, so it will
take us a very long time to get to the bottom.
Fig. 30: jeremyjordan
Cost function
A Loss Function (Error function) tells us “how good” our model is at making predictions
for a given set of parameters. The cost function has its own curve and its own gradients.
The slope of this curve tells us how to update our parameters to make the model more
accurate.
def gradient_descent(X, Y, learning_rate, number_iterations):
    m, b = 1, 1
    data_length = len(X)
    for _ in range(number_iterations):
        m_deriv, b_deriv = 0, 0
        for i in range(data_length):
            # Accumulate the partial derivatives of the squared error (m*X[i] + b - Y[i])**2
            # (up to a constant factor of 2) with respect to m and b
            m_deriv += X[i] * ((m * X[i] + b) - Y[i])
            b_deriv += (m * X[i] + b) - Y[i]
        # Move against the gradient, scaled by the learning rate
        m = m - (m_deriv / data_length) * learning_rate
        b = b - (b_deriv / data_length) * learning_rate
    return m, b
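A minimal usage sketch of the update rule above on synthetic data (the data, learning rate
and number of iterations are illustrative choices, not taken from the text):

import numpy as np

rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=100)
Y = 2 * X + 1 + rng.normal(scale=0.5, size=100)   # noisy line y = 2x + 1

m, b = gradient_descent(X, Y, learning_rate=0.01, number_iterations=5000)
print(m, b)   # both parameters should end up close to 2 and 1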
There are three variants of gradient descent, which differ in how much data we use to
compute the gradient of the objective function. Depending on the amount of data, we make
a trade-off between the accuracy of the parameter update and the time it takes to perform
an update.
Batch gradient descent, also known as vanilla gradient descent, computes the gradient of
the cost function with respect to the parameters $\theta$ for the entire training dataset:

$\theta = \theta - \eta \cdot \nabla_\theta J(\theta)$
As we need to calculate the gradients for the whole dataset to perform just one update,
batch gradient descent can be very slow and is intractable for datasets that don’t fit in
memory. Batch gradient descent also doesn’t allow us to update our model online.
Stochastic gradient descent (SGD), in contrast, performs a parameter update for each
training example $x^{(i)}$ and label $y^{(i)}$:

$\theta = \theta - \eta \cdot \nabla_\theta J(\theta; x^{(i)}, y^{(i)})$

• Choose an initial vector of parameters $\theta$ and learning rate $\eta$.
• Repeat until an approximate minimum is obtained:
  – randomly shuffle the examples in the training set;
  – for each example $i$, perform the update above.
Mini-batch gradient descent finally takes the best of both worlds and performs an update
for every mini-batch of $n$ training examples:

$\theta = \theta - \eta \cdot \nabla_\theta J(\theta; x^{(i:i+n)}, y^{(i:i+n)})$
Fig. 31: Wikipedia
• reduces the variance of the parameter updates, which can lead to more stable convergence;
• can make use of highly optimized matrix operations common to state-of-the-art deep
learning libraries that make computing the gradient very efficient. Common mini-batch
sizes range between 50 and 256, but can vary for different applications.
Mini-batch gradient descent is typically the algorithm of choice when training a neural
network.
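To make the three variants concrete, here is a schematic sketch (not from the text);
grad(theta, X, y) is an assumed helper returning the gradient of the cost J on the given
examples:

import numpy as np

def sgd_variants(theta, X, y, grad, eta=0.01, n_epochs=10, batch_size=64, mode="minibatch"):
    """Schematic illustration of batch, stochastic and mini-batch gradient descent."""
    n = len(X)
    for _ in range(n_epochs):
        if mode == "batch":                       # one update per epoch, whole dataset
            theta = theta - eta * grad(theta, X, y)
        elif mode == "sgd":                       # one update per training example
            for i in np.random.permutation(n):
                theta = theta - eta * grad(theta, X[i:i + 1], y[i:i + 1])
        else:                                     # one update per mini-batch of examples
            idx = np.random.permutation(n)
            for start in range(0, n, batch_size):
                batch = idx[start:start + batch_size]
                theta = theta - eta * grad(theta, X[batch], y[batch])
    return theta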
Vanilla mini-batch gradient descent, however, does not guarantee good convergence and
raises a few challenges that need to be addressed:
• Choosing a proper learning rate can be difficult. A learning rate that is too small leads
to painfully slow convergence, while a learning rate that is too large can hinder
convergence and cause the loss function to fluctuate around the minimum or even to diverge.
• Learning rate schedules try to adjust the learning rate during training, e.g. by
annealing, i.e. reducing the learning rate according to a pre-defined schedule or when
the change in objective between epochs falls below a threshold. These schedules and
thresholds, however, have to be defined in advance and are thus unable to adapt to a
dataset's characteristics.
• Additionally, the same learning rate applies to all parameter updates. If our data is
sparse and our features have very different frequencies, we might not want to update
all of them to the same extent, but perform a larger update for rarely occurring
features.
• Another key challenge of minimizing the highly non-convex error functions common for
neural networks is avoiding getting trapped in their numerous suboptimal local minima.
These saddle points (local minima) are usually surrounded by a plateau of the same error,
which makes it notoriously hard for SGD to escape, as the gradient is close to zero in
all dimensions.
In the following, we will outline some algorithms that are widely used by the deep
learning community to deal with the aforementioned challenges.
Momentum
SGD has trouble navigating ravines (areas where the surface curves much more steeply in
one dimension than in another), which are common around local optima. In these
scenarios, SGD oscillates across the slopes of the ravine while only making hesitant
progress along the bottom towards the local optimum as in the image below.
Source: without momentum, moving toward the locally largest gradient creates oscillations;
with momentum, accumulating velocity damps the oscillations.
Fig. 32: Wikipedia
Momentum is a method that helps accelerate SGD in the relevant direction and dampens
oscillations, as can be seen in the image above. It does this by adding a fraction $\rho$
of the update vector of the past time step to the current update vector:

$v_t = \rho v_{t-1} + \nabla_\theta J(\theta)$  (5.52)

$\theta = \theta - v_t$
vx = 0
while True:
    dx = gradient(J, x)
    vx = rho * vx + dx
    x -= learning_rate * vx
Note: The momentum term $\rho$ is usually set to 0.9 or a similar value.
Essentially, when using momentum, we push a ball down a hill. The ball accumulates
momentum as it rolls downhill, becoming faster and faster on the way (until it reaches its
terminal velocity if there is air resistance, i.e. $\rho < 1$).
The same thing happens to our parameter updates: the momentum term increases for
dimensions whose gradients point in the same directions and reduces updates for dimensions
whose gradients change directions. As a result, we gain faster convergence and reduced
oscillation.
AdaGrad

• Added element-wise scaling of the gradient based on the historical sum of squares in
each dimension.
• "Per-parameter learning rates" or "adaptive learning rates".

grad_squared = 0
while True:
    dx = gradient(J, x)
    grad_squared += dx * dx
    x -= learning_rate * dx / (np.sqrt(grad_squared) + 1e-7)

• Progress along "steep" directions is damped.
• Progress along "flat" directions is accelerated.
• Problem: the step size decays to zero over a long time.
However, a ball that rolls down a hill, blindly following the slope, is highly
unsatisfactory. We'd like to have a smarter ball, a ball that has a notion of where it is
going so that it knows to slow down before the hill slopes up again. Nesterov accelerated
gradient (NAG) is a way to give our momentum term this kind of prescience. We know that we
will use our momentum term $\gamma v_{t-1}$ to move the parameters $\theta$.
Computing $\theta - \gamma v_{t-1}$ thus gives us an approximation of the next position of
the parameters (the gradient is missing for the full update), a rough idea where our
parameters are going to be. We can now effectively look ahead by calculating the gradient
not w.r.t. our current parameters $\theta$ but w.r.t. the approximate future position of
our parameters:
$v_t = \gamma v_{t-1} + \eta \nabla_\theta J(\theta - \gamma v_{t-1})$  (5.53)

$\theta = \theta - v_t$
Again, we set the momentum term $\gamma$ to a value of around 0.9. While momentum first
computes the current gradient and then takes a big jump in the direction of the updated
accumulated gradient, NAG first makes a big jump in the direction of the previous
accumulated gradient, measures the gradient and then makes a correction, which results in
the complete NAG update. This anticipatory update prevents us from going too fast and
results in increased responsiveness, which has significantly increased the performance of
RNNs on a number of tasks.
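A minimal sketch of the NAG update (5.53), reusing the same illustrative gradient(J, x)
helper and variables as the momentum snippet above; the values of gamma and the learning
rate are the usual defaults and are assumptions, not values from the text:

v = 0
gamma, learning_rate = 0.9, 1e-3
while True:
    # Look ahead: evaluate the gradient at the approximate future position x - gamma * v
    dx_ahead = gradient(J, x - gamma * v)
    v = gamma * v + learning_rate * dx_ahead
    x -= v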
Adam

Adaptive Moment Estimation (Adam) is a method that computes adaptive learning rates for
each parameter. In addition to storing an exponentially decaying average of past squared
gradients $v_t$, Adam also keeps an exponentially decaying average of past gradients
$m_t$, similar to momentum. Whereas momentum can be seen as a ball running down a slope,
Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error
surface. We compute the decaying averages of past and past squared gradients $m_t$ and
$v_t$ respectively as follows (with $g_t$ the current gradient):

$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$

$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$

As $m_t$ and $v_t$ are initialized as vectors of 0's, the authors of Adam observe that they
are biased towards zero, especially during the initial time steps, and especially when the
decay rates are small (i.e. $\beta_1$ and $\beta_2$ are close to 1). They counteract these
biases by computing bias-corrected first and second moment estimates:
$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$  (5.55)

$\hat{v}_t = \frac{v_t}{1 - \beta_2^t}$  (5.56)
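A minimal sketch of the full Adam step built from the moments above and the bias
corrections (5.55)-(5.56); the hyper-parameter values (beta1=0.9, beta2=0.999, eps=1e-8,
eta=1e-3) are the usual defaults and, like the gradient(J, x) helper and n_iterations, are
assumptions rather than values from the text:

import numpy as np

m, v = 0, 0
beta1, beta2, eps, eta = 0.9, 0.999, 1e-8, 1e-3
for t in range(1, n_iterations + 1):
    g = gradient(J, x)                       # current gradient
    m = beta1 * m + (1 - beta1) * g          # decaying average of past gradients
    v = beta2 * v + (1 - beta2) * g * g      # decaying average of past squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment (5.55)
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment (5.56)
    x -= eta * m_hat / (np.sqrt(v_hat) + eps)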
CHAPTER
SIX
DEEP LEARNING
1. Backpropagation
1. Course outline:
$Y = \max(X W^{(1)}, 0) W^{(2)}$
A fully-connected ReLU network with one hidden layer and no biases, trained to predict y
from x using Euclidean error.
Chain rule

Backward pass: compute the gradient of the loss with respect to each parameter vector by
applying the chain rule from the loss downstream to the parameters:
For $W^{(2)}$:

$\frac{\partial L}{\partial W^{(2)}} = \frac{\partial L}{\partial z^{(2)}} \frac{\partial z^{(2)}}{\partial W^{(2)}}$  (6.1)

$= 2(z^{(2)} - y)\, h^{(1)}$  (6.2)
For $W^{(1)}$:

Regular derivative (scalar to scalar): $x \in \mathbb{R}$, $y \in \mathbb{R}$,

$\frac{\partial y}{\partial x} \in \mathbb{R}$

If $x$ changes by a small amount, how much will $y$ change?

• Vector to scalar: $x \in \mathbb{R}^N$, $y \in \mathbb{R}$. The derivative is the gradient
of partial derivatives:

$\frac{\partial y}{\partial x} = \nabla_x y = \begin{bmatrix} \frac{\partial y}{\partial x_1} \\ \vdots \\ \frac{\partial y}{\partial x_N} \end{bmatrix} \in \mathbb{R}^N$  (6.5)

For each element $x_i$ of $x$, if it changes by a small amount then how much will $y$ change?

• Vector to vector: $x \in \mathbb{R}^N$, $y \in \mathbb{R}^M$. The derivative is the
Jacobian matrix of partial derivatives, of shape $M \times N$.
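To make the vector-to-scalar case concrete, a small finite-difference sketch (an
illustration, not part of the original lab) that approximates each partial derivative
numerically:

import numpy as np

def numerical_gradient(f, x, h=1e-5):
    """Approximate the gradient of a scalar function f at x by central differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

# Example: y = ||x||^2, whose gradient is 2x
x = np.array([1.0, -2.0, 3.0])
print(numerical_gradient(lambda v: (v ** 2).sum(), x))  # ~ [2., -4., 6.]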
Backpropagation summary

Backpropagation algorithm in a graph:

1. Forward pass: for each node, compute the local partial derivatives of the output given
the inputs.
2. Backward pass: apply the chain rule from the end back to each parameter:
   - update each parameter with gradient descent, using the current upstream gradient and
     the current local gradient;
   - compute the upstream gradient for the backward nodes.

Think locally and remember that at each node:
- for the loss, the gradient is the error;
- at each step, the upstream gradient is obtained by multiplying the upstream gradient (an
  error) with the current parameters (vector or matrix);
- at each step, the current local gradient equals the input, therefore the current update
  is the current upstream gradient times the input.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.model_selection
# Y = 'petal_length', 'petal_width'; X = 'sepal_length', 'sepal_width'
X_iris = np.asarray(df.loc[:, ['sepal_length', 'sepal_width']], dtype=np.float32)
Y_iris = np.asarray(df.loc[:, ['petal_length', 'petal_width']], dtype=np.float32)
label_iris = np.asarray(df.species_n, dtype=int)

# Scale
from sklearn.preprocessing import StandardScaler
scalerx, scalery = StandardScaler(), StandardScaler()
X_iris = scalerx.fit_transform(X_iris)
Y_iris = StandardScaler().fit_transform(Y_iris)
This implementation uses numpy to manually compute the forward pass, loss, and
backward pass.
# X=X_iris_tr; Y=Y_iris_tr; X_val=X_iris_val; Y_val=Y_iris_val
W1 = np.random.randn(D_in, H)
W2 = np.random.randn(H, D_out)

losses_tr, losses_val = list(), list()
learning_rate = lr
for t in range(nite):
    # Forward pass: compute predicted y
    z1 = X.dot(W1)
    h1 = np.maximum(z1, 0)
    y_pred = h1.dot(W2)
    loss = np.square(y_pred - Y).sum()
    # Backward pass: compute gradients of W1 and W2 with respect to the loss
    grad_y_pred = 2.0 * (y_pred - Y)
    grad_w2 = h1.T.dot(grad_y_pred)
    grad_h1 = grad_y_pred.dot(W2.T)
    grad_z1 = grad_h1.copy()
    grad_z1[z1 < 0] = 0
    grad_w1 = X.T.dot(grad_z1)
    # Validation loss
    loss_val = np.square(np.maximum(X_val.dot(W1), 0).dot(W2) - Y_val).sum()
    # Update weights
    W1 -= learning_rate * grad_w1
    W2 -= learning_rate * grad_w2
    losses_tr.append(loss)
    losses_val.append(loss_val)
    if t % 10 == 0:
        print(t, loss, loss_val)
source
Numpy is a great framework, but it cannot utilize GPUs to accelerate its numerical
computations. For modern deep neural networks, GPUs often provide speedups of 50x or
greater, so unfortunately numpy won't be enough for modern deep learning. Here we introduce
the most fundamental PyTorch concept: the Tensor. A PyTorch Tensor is conceptually
identical to a numpy array: a Tensor is an n-dimensional array, and PyTorch provides many
functions for operating on these Tensors. Behind the scenes, Tensors can keep track of a
computational graph and gradients, but they're also useful as a generic tool for scientific
computing. Also unlike numpy, PyTorch Tensors can utilize GPUs to accelerate their numeric
computations. To run a PyTorch Tensor on GPU, you simply need to cast it to a new datatype.
Here we use PyTorch Tensors to fit a two-layer network to random data. Like the numpy
example above we need to manually implement the forward and backward passes through the
network:
import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

losses_tr, losses_val = list(), list()
learning_rate = lr
for t in range(nite):
    # Forward pass: compute predicted y
    z1 = X.mm(W1)
    h1 = z1.clamp(min=0)
    y_pred = h1.mm(W2)
    loss = (y_pred - Y).pow(2).sum().item()
    # Backward pass: manually compute gradients of W1 and W2 (as in the numpy example)
    grad_y_pred = 2.0 * (y_pred - Y)
    grad_w2 = h1.t().mm(grad_y_pred)
    grad_h1 = grad_y_pred.mm(W2.t())
    grad_z1 = grad_h1.clone()
    grad_z1[z1 < 0] = 0
    grad_w1 = X.t().mm(grad_z1)
    # Validation loss
    loss_val = (X_val.mm(W1).clamp(min=0).mm(W2) - Y_val).pow(2).sum().item()
    # Update weights using gradient descent
    W1 -= learning_rate * grad_w1
    W2 -= learning_rate * grad_w2
    losses_tr.append(loss)
    losses_val.append(loss_val)
    if t % 10 == 0:
        print(t, loss, loss_val)
[<matplotlib.lines.Line2D at 0x7f960033c470>,
<matplotlib.lines.Line2D at 0x7f960033c5c0>]
source
A fully-connected ReLU network with one hidden layer and no biases, trained to predict y
from x by minimizing squared Euclidean distance. This implementation computes the
forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute
gradients. A PyTorch Tensor represents a node in a computational graph. If x is a Tensor
that has x.requires_grad=True then x.grad is another Tensor holding the gradient of x with
respect to some scalar value.
import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

losses_tr, losses_val = list(), list()
learning_rate = lr
for t in range(nite):
    # Forward pass: compute predicted y using operations on Tensors; these
    # are exactly the same operations we used to compute the forward pass using
    # Tensors, but we do not need to keep references to intermediate values since
    # we are not implementing the backward pass by hand.
    y_pred = X.mm(W1).clamp(min=0).mm(W2)
    loss = (y_pred - Y).pow(2).sum()

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call w1.grad and w2.grad will be Tensors holding the gradient
    # of the loss with respect to w1 and w2 respectively.
    loss.backward()

    # Manually update weights using gradient descent, then zero the gradients
    with torch.no_grad():
        W1 -= learning_rate * W1.grad
        W2 -= learning_rate * W2.grad
        W1.grad.zero_()
        W2.grad.zero_()
        y_pred = X_val.mm(W1).clamp(min=0).mm(W2)
        loss_val = (y_pred - Y_val).pow(2).sum()

    losses_tr.append(loss.item())
    losses_val.append(loss_val.item())
    if t % 10 == 0:
        print(t, loss.item(), loss_val.item())
lr=1e-4, nite=50)
plt.plot(np.arange(len(losses_tr)), losses_tr, "-b", np.arange(len(losses_val)), losses_val, "-r")
0 8307.1806640625 2357.994873046875
10 111.97289276123047 250.04209899902344
20 65.83244323730469 201.63694763183594
30 53.70908737182617 183.17051696777344
40 48.719329833984375 173.3616943359375
[<matplotlib.lines.Line2D at 0x7f95ff2ad978>,
<matplotlib.lines.Line2D at 0x7f95ff2adac8>]
source
This implementation uses the nn package from PyTorch to build the network. PyTorch
autograd makes it easy to define computational graphs and take gradients, but raw
autograd can be a bit too low-level for defining complex neural networks; this is where
the nn package can help. The nn package defines a set of Modules, which you can think of
as a neural network layer that produces output from input and may have some trainable
weights.
import torch

X = torch.from_numpy(X)
Y = torch.from_numpy(Y)
X_val = torch.from_numpy(X_val)
Y_val = torch.from_numpy(Y_val)

# Use the nn package to define our model as a sequence of layers
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

losses_tr, losses_val = list(), list()
learning_rate = lr
for t in range(nite):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the call operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(X)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, Y)

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    model.zero_grad()
    loss.backward()

    # Update the weights using gradient descent and compute the validation loss
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
        loss_val = loss_fn(model(X_val), Y_val)

    if t % 10 == 0:
        print(t, loss.item(), loss_val.item())

    losses_tr.append(loss.item())
    losses_val.append(loss_val.item())
[<matplotlib.lines.Line2D at 0x7f95ff296668>,
<matplotlib.lines.Line2D at 0x7f95ff2967b8>]
This implementation uses the nn package from PyTorch to build the network. Rather than
manually updating the weights of the model as we have been doing, we use the optim package
to define an Optimizer that will update the weights for us. The optim package defines many
optimization algorithms that are commonly used for deep learning, including SGD+momentum,
RMSProp, Adam, etc.
import torch

X = torch.from_numpy(X)
Y = torch.from_numpy(Y)
X_val = torch.from_numpy(X_val)
Y_val = torch.from_numpy(Y_val)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # an Adam optimizer is assumed here

losses_tr, losses_val = list(), list()
for t in range(nite):
    y_pred = model(X)
    loss = loss_fn(y_pred, Y)

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Checkout docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    loss.backward()
    optimizer.step()

    with torch.no_grad():
        loss_val = loss_fn(model(X_val), Y_val)

    if t % 10 == 0:
        print(t, loss.item(), loss_val.item())

    losses_tr.append(loss.item())
    losses_val.append(loss_val.item())
[<matplotlib.lines.Line2D at 0x7f95ff200080>,
<matplotlib.lines.Line2D at 0x7f95ff2001d0>]
1. Course outline:
Pytorch
• WWW tutorials
• github tutorials
•github examples
# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)

cuda:0

Hyperparameters
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))  # Mean and Std of the MNIST dataset
                   ])),
    batch_size=batch_size_train, shuffle=True)

val_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))  # Mean and Std of the MNIST dataset
    ])),
    batch_size=batch_size_test, shuffle=True)

return train_loader, val_loader
So one test data batch is a tensor of shape torch.Size([1000, 1, 28, 28]). This means we
have 1000 examples of 28x28 pixels in grayscale (i.e. no rgb channels, hence the one). We
can plot some of them using matplotlib.

def show_data_label_prediction(data, y_true, y_pred=None, shape=(2, 3)):
    y_pred = [None] * len(y_true) if y_pred is None else y_pred
    fig = plt.figure()
    for i in range(np.prod(shape)):
        plt.subplot(*shape, i+1)
        plt.tight_layout()
        plt.imshow(data[i][0], cmap='gray', interpolation='none')
        plt.title("True: {} Pred: {}".format(y_true[i], y_pred[i]))
        plt.xticks([])
        plt.yticks([])

show_data_label_prediction(data=example_data, y_true=example_targets, y_pred=None, shape=(2, 3))
$f(x) = \sigma(x^T w)$
print(X_train.shape, y_train.shape)
import matplotlib.pyplot as plt
import numpy as np
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

coef = clf.coef_.copy()
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()
for i in range(10):
    l1_plot = plt.subplot(2, 5, i + 1)
    l1_plot.imshow(coef[i].reshape(28, 28), interpolation='nearest',
                   cmap=plt.cm.RdBu, vmin=-scale, vmax=scale)
    l1_plot.set_xticks(())
    l1_plot.set_yticks(())
    l1_plot.set_xlabel('Class %i' % i)
plt.show()
mlp.fit(X_train, y_train)
print("Training set score: %f" % mlp.score(X_train, y_train))
print("Test set score: %f" % mlp.score(X_test, y_test))
fig, axes = plt.subplots(4, 4)
# use global min / max to ensure all weights are shown on the same scale
vmin, vmax = mlp.coefs_[0].min(), mlp.coefs_[0].max()
for coef, ax in zip(mlp.coefs_[0].T, axes.ravel()):
    ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=.5 * vmin, vmax=.5 * vmax)
    ax.set_xticks(())
    ax.set_yticks(())
plt.show()
/home/ed203246/anaconda3/lib/python3.7/site-packages/sklearn/neural_network/multilayer_
˓→perceptron.py:562: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (5)␣
˓→reached and the optimization hasn't converged yet.
% self.max_iter, ConvergenceWarning)
MLP with pytorch
class TwoLayerMLP(nn.Module):
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0

running_loss = 0.0
running_corrects = 0

# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    loss = criterion(outputs, labels)

# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)

#nsamples = dataloaders[phase].dataset.data.shape[0]
epoch_loss = running_loss / nsamples
epoch_acc = running_corrects.double() / nsamples

losses[phase].append(epoch_loss)
accuracies[phase].append(epoch_acc)

if log_interval is not None and epoch % log_interval == 0:
    print('{} Loss: {:.4f} Acc: {:.2f}%'.format(
        phase, epoch_loss, 100 * epoch_acc))

print(next(model.parameters()).is_cuda)
torch.save(model.state_dict(), 'models/mod-%s.pth' % model.__class__.__name__)
True
torch.Size([50, 784])
torch.Size([50])
torch.Size([10, 50])
torch.Size([10])
Total number of parameters =39760
Epoch 0/0
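The body of the TwoLayerMLP class is not reproduced on this page; a minimal sketch
consistent with the parameter shapes printed above (784 -> 50 -> 10, 39,760 parameters in
total) and with the NLLLoss used elsewhere in this chapter could be:

import torch.nn as nn
import torch.nn.functional as F

class TwoLayerMLP(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super(TwoLayerMLP, self).__init__()
        self.d_in = d_in
        self.linear1 = nn.Linear(d_in, d_hidden)
        self.linear2 = nn.Linear(d_hidden, d_out)

    def forward(self, X):
        X = X.view(-1, self.d_in)              # flatten 28x28 images into 784 features
        X = F.relu(self.linear1(X))
        return F.log_softmax(self.linear2(X), dim=1)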
Use the model to make new predictions. Consider the device: load the data on the device
with example_data.to(device) before prediction, then move the result back to the CPU with
example_data.cpu().
batch_idx, (example_data, example_targets) = next(enumerate(val_loader))
example_data = example_data.to(device)

with torch.no_grad():
    output = model(example_data).cpu()

example_data = example_data.cpu()

# Softmax predictions
preds = output.argmax(dim=1)

print("Output shape=", output.shape, "label shape=", preds.shape)
print("Accuracy = {:.2f}%".format((example_targets == preds).sum().item() * 100. / len(example_targets)))

Output shape= torch.Size([10000, 10]) label shape= torch.Size([10000])
Accuracy = 91.25%

show_data_label_prediction(data=example_data, y_true=example_targets, y_pred=preds, shape=(3, 4))
Plot misclassified samples

errors = example_targets != preds
#print(errors, np.where(errors))
print("Nb errors = {}, (Error rate = {:.2f}%)".format(errors.sum(), 100 * errors.sum().item() / len(errors)))

err_idx = np.where(errors)[0]
show_data_label_prediction(data=example_data[err_idx],
                           y_true=example_targets[err_idx],
                           y_pred=preds[err_idx])

Nb errors = 875, (Error rate = 8.75%)
Continue training from checkpoints: reload the model and run 10 more epochs.

model = TwoLayerMLP(D_in, 50, D_out)
model.load_state_dict(torch.load('models/mod-%s.pth' % model.__class__.__name__))
model.to(device)
Epoch 0/9
Epoch 2/9
Epoch 4/9
Epoch 6/9
Epoch 8/9
• Define a MultiLayerMLP([D_in, 512, 256, 128, 64, D_out]) class that takes the sizes of
the layers as parameters of the constructor.
• Add some non-linearity with the relu activation function.
class MLP(nn.Module):
Epoch 0/9
Epoch 2/9
Epoch 4/9
Epoch 6/9
Epoch 8/9
Reduce the size of the training dataset by considering only 10 mini-batches of size 16.
train_size = 10 * 16

# Stratified sub-sampling
targets = train_loader.dataset.targets.numpy()
nclasses = len(set(targets))

indices = np.concatenate([np.random.choice(np.where(targets == lab)[0], int(train_size / nclasses), replace=False)
                          for lab in set(targets)])

Train size= 160 Train label count= {0: 16, 1: 16, 2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 16, 8: 16, 9: 16}
Batch sizes= [16, 16, 16, 16, 16, 16, 16, 16, 16, 16]
Datasets shape {'train': torch.Size([60000, 28, 28]), 'val': torch.Size([10000, 28, 28])}
N input features 784 N output 10
Epoch 0/99
Epoch 20/99
Epoch 40/99
Epoch 60/99
Epoch 80/99
Epoch 0/99
Epoch 20/99
Epoch 40/99
Epoch 60/99
Epoch 80/99
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000
images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000
images. The test batch contains exactly 1000 randomly-selected images from each class.
The training batches contain the remaining images in random order, but some training
batches may contain more images from one class than another. Between them, the training
batches contain exactly 5000 images from each class.
Here are the classes in the dataset, as well as 10 random images from each: - airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck
import numpy as np
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
num_epochs = 5
learning_rate = 0.001

# CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='data/',
                                             train=True,
                                             transform=transform,
                                             download=True)

val_dataset = torchvision.datasets.CIFAR10(root='data/',
                                           train=False,
                                           transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=100,
                                           shuffle=True)

val_loader = torch.utils.data.DataLoader(dataset=val_dataset,
                                         batch_size=100,
                                         shuffle=False)
# Put together train and val
dataloaders = dict(train=train_loader, val=val_loader)
Epoch 0/49
Epoch 10/49
Epoch 20/49
Epoch 30/49
Epoch 40/49
1. Outline
2. Architectures
3. Train and test functions
4. CNN models
5. MNIST
6. CIFAR-10
Sources:
Deep learning - cs231n.stanford.edu
CNN - Stanford cs231n
Pytorch - WWW tutorials - github tutorials - github examples
MNIST and pytorch: - MNIST nextjournal.com/gkoehler/pytorch-mnist - MNIST github/pytorch/examples - MNIST kaggle

2. Architectures
Sources:
• cv-tricks.com
• zhenye-na.github.io: https://zhenye-na.github.io/2018/12/01/cnn-deep-leearning-ai-week2.html
LeNet
Fig. 1:
LeNet
AlexNet
Fig. 2: AlexNet
• Deeper, bigger.
• Featured convolutional layers stacked on top of each other (previously it was common to
only have a single CONV layer always immediately followed by a POOL layer).
• ReLU (Rectified Linear Unit) for the non-linear part, instead of a Tanh or Sigmoid. The
advantage of ReLU over sigmoid is that it trains much faster, because the derivative of
the sigmoid becomes very small in the saturating region and therefore the updates to the
weights almost vanish. This is called the vanishing gradient problem.
• Dropout: reduces over-fitting by using a Dropout layer after every FC layer. A Dropout
layer has a probability p associated with it and is applied at every neuron of the
response map separately. It randomly switches off the activation with probability p.
Why does DropOut work?
Fig. 3: AlexNet architecture
Fig. 4: Dropout
The idea behind dropout is similar to model ensembles. Due to the dropout layer, different
sets of neurons which are switched off represent a different architecture, and all these
different architectures are trained in parallel with weight given to each subset and the
summation of weights being one. For n neurons attached to DropOut, the number of subset
architectures formed is 2^n. So it amounts to prediction being averaged over these
ensembles of models. This provides a structured model regularization which helps in
avoiding the over-fitting. Another view of DropOut being helpful is that since neurons are
randomly chosen, they tend to avoid developing co-adaptations among themselves, thereby
enabling them to develop meaningful features, independent of others.
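As a small illustration (not from the text), this is how a Dropout layer with probability
p is typically inserted after a fully-connected layer in PyTorch:

import torch
import torch.nn as nn

fc_block = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each activation is zeroed with probability p during training
    nn.Linear(128, 10),
)

x = torch.randn(4, 256)
fc_block.train()         # dropout active: random units are switched off
print(fc_block(x).shape)
fc_block.eval()          # dropout disabled at inference time
print(fc_block(x).shape)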
• Data augmentation is carried out to reduce over-fitting. This Data augmentation
includes mirroring and cropping the images to increase the variation in the training
data-set.
GoogLeNet (Szegedy et al., Google, 2014) was a Convolutional Network. Its main contribution
was the development of an:
• Inception Module that dramatically reduced the number of parameters in the network (4M,
compared to AlexNet with 60M).
• There are also several followup versions to the GoogLeNet, most recently Inception-v4.
VGGNet. (Karen Simonyan and Andrew Zisserman 2014)
• 16 CONV/FC layers and, appealingly, features an extremely homogeneous
architecture.
• Only performs 3x3 convolutions and 2x2 pooling from the beginning to the end. Replaces
large kernel-sized filters (11 and 5 in the first and second convolutional layers,
respectively) with multiple 3x3 kernel-sized filters one after another.

With a given receptive field (the effective area of the input image on which the output
depends), multiple stacked smaller-size kernels are better than one larger-size kernel,
because multiple non-linear layers increase the depth of the network, which enables it to
learn more complex features at a lower cost. For example, three 3x3 filters stacked on top
of each other with stride 1 have a receptive field of 7, but for C channels the number of
parameters involved is 3 x (9 C^2) = 27 C^2, compared with 49 C^2 parameters for a single
kernel of size 7.
Fig. 6: VGGNet
Fig. 7: VGGNet architecture
Fig. 8: ResNet block
Fig. 9: ResNet 18
• Skip connections
• Batch normalization.
• State-of-the-art CNN models and the default choice (as of May 10, 2016).
• In particular, also see more recent developments that tweak the original architecture,
from Kaiming He et al., Identity Mappings in Deep Residual Networks (published March 2016).
Models in pytorch
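The corresponding code listing is not reproduced here; torchvision exposes these reference
architectures directly, for example (a sketch):

import torchvision.models as models

resnet18 = models.resnet18()                       # random weights
alexnet = models.alexnet()
vgg16 = models.vgg16()
resnet18_pre = models.resnet18(pretrained=True)    # ImageNet weights, used later for transfer learning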
4. Train function
%matplotlib inline

import os
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
import torchvision.transforms as transforms
from torchvision import models
#
from pathlib import Path
import matplotlib.pyplot as plt
# %load train_val_model.py
import numpy as np
import torch
import time
import copy

best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0

running_loss = 0.0
running_corrects = 0

# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    loss = criterion(outputs, labels)

# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)

#nsamples = dataloaders[phase].dataset.data.shape[0]
epoch_loss = running_loss / nsamples
epoch_acc = running_corrects.double() / nsamples

losses[phase].append(epoch_loss)
accuracies[phase].append(epoch_acc)

if log_interval is not None and epoch % log_interval == 0:
    print('{} Loss: {:.4f} Acc: {:.2f}%'.format(
        phase, epoch_loss, 100 * epoch_acc))
LeNet-5

class LeNet5(nn.Module):
    """
    layers: (nb channels in input layer,
             nb channels in 1rst conv, nb channels in 2nd conv,
             nb neurons for 1rst FC: TO BE TUNED,
             nb neurons for 2nd FC, nb neurons for 3rd FC,
             ...)
    """

        x = x.view(-1, self.layers[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)
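The __init__ and the convolutional part of the forward pass of LeNet5 are not fully
reproduced above; a sketch consistent with the tail of forward shown here and with the
model printout further below (Conv2d(1, 6, 5, padding=2), Conv2d(6, 16, 5), three
fully-connected layers) could be:

import numpy as np
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    """Sketch of the LeNet-5 variant used below, with `layers` as documented above."""
    def __init__(self, layers=(1, 6, 16, 400, 120, 84, 10), debug=False):
        super(LeNet5, self).__init__()
        self.layers, self.debug = layers, debug
        self.conv1 = nn.Conv2d(layers[0], layers[1], kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(layers[1], layers[2], kernel_size=5)
        self.fc1 = nn.Linear(layers[3], layers[4])
        self.fc2 = nn.Linear(layers[4], layers[5])
        self.fc3 = nn.Linear(layers[5], layers[6])

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        if self.debug:
            print("### DEBUG: Shape of last convnet=", x.shape[1:], ". FC size=", np.prod(x.shape[1:]))
        x = x.view(-1, self.layers[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)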
class MiniVGGNet(torch.nn.Module):

    def __init__(self, layers=(1, 16, 32, 1024, 120, 84, 10), debug=False):
        super(MiniVGGNet, self).__init__()
        self.layers = layers
        self.debug = debug

        # Conv block 1
        self.conv11 = nn.Conv2d(in_channels=layers[0], out_channels=layers[1],
                                kernel_size=3, stride=1, padding=0, bias=True)
        self.conv12 = nn.Conv2d(in_channels=layers[1], out_channels=layers[1],
                                kernel_size=3, stride=1, padding=0, bias=True)
        x = F.relu(self.conv21(x))
        x = F.relu(self.conv22(x))
        x = F.max_pool2d(x, 2)
        if self.debug:
            print("### DEBUG: Shape of last convnet=", x.shape[1:], ". FC size=", np.prod(x.shape[1:]))
        x = x.view(-1, self.layers[3])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

ResNet-like Model:
# An implementation of https://arxiv.org/pdf/1512.03385.pdf
# See section 4.2 for the model architecture on CIFAR-10
# Some part of the code was referenced from below
# https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py

import torch.nn as nn

# 3x3 convolution
def conv3x3(in_channels, out_channels, stride=1):
    return nn.Conv2d(in_channels, out_channels, kernel_size=3,
                     stride=stride, padding=1, bias=False)
# Residual block
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv3x3(in_channels, out_channels, stride)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(out_channels, out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.downsample = downsample
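The forward method of ResidualBlock is not shown on this page; a sketch consistent with
the attributes defined above (and with the torchvision reference implementation cited in
the header comments) would add:

    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample:
            residual = self.downsample(x)
        out += residual          # the skip connection
        out = self.relu(out)
        return out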
# ResNet
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 16
        self.conv = conv3x3(3, 16)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
        self.layer1 = self.make_layer(block, 16, layers[0])
        self.layer2 = self.make_layer(block, 32, layers[1], 2)
        self.layer3 = self.make_layer(block, 64, layers[2], 2)
        self.avg_pool = nn.AvgPool2d(8)
        self.fc = nn.Linear(64, num_classes)
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size_train, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size_test, shuffle=True)

return train_loader, test_loader
LeNet

Dry run in debug mode to get the shape of the last convnet layer.

model = LeNet5((1, 6, 16, 1, 120, 84, 10), debug=True)
batch_idx, (data_example, target_example) = next(enumerate(train_loader))
print(model)
_ = model(data_example)
LeNet5(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=1, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
### DEBUG: Shape of last convnet= torch.Size([16, 5, 5]) . FC size= 400

Set First FC layer to 400
model = LeNet5((1, 6, 16, 400, 120, 84, 10)).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
criterion = nn.NLLLoss()
Epoch 2/4
Epoch 4/4
MiniVGGNet
print(model)
_ = model(data_example)
MiniVGGNet(
(conv11): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1))
(conv12): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1))
(conv21): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
(conv22): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=1, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
### DEBUG: Shape of last convnet= torch.Size([32, 5, 5]) . FC size= 800
torch.Size([16, 1, 3, 3])
torch.Size([16])
torch.Size([16, 16, 3, 3])
torch.Size([16])
torch.Size([32, 16, 3, 3])
torch.Size([32])
torch.Size([32, 32, 3, 3])
torch.Size([32])
torch.Size([120, 800])
torch.Size([120])
torch.Size([84, 120])
torch.Size([84])
torch.Size([10, 84])
torch.Size([10])
Total number of parameters =123502
Epoch 0/4
Epoch 2/4
Epoch 4/4
Reduce the size of the training dataset by considering only 10 mini-batches of size 16.

train_size = 10 * 16

# Stratified sub-sampling
targets = train_loader.dataset.targets.numpy()
nclasses = len(set(targets))

indices = np.concatenate([np.random.choice(np.where(targets == lab)[0], int(train_size / nclasses), replace=False)
                          for lab in set(targets)])

Train size= 160 Train label count= {0: 16, 1: 16, 2: 16, 3: 16, 4: 16, 5: 16, 6: 16, 7: 16, 8: 16, 9: 16}
Batch sizes= [16, 16, 16, 16, 16, 16, 16, 16, 16, 16]
Datasets shape {'train': torch.Size([60000, 28, 28]), 'val': torch.Size([10000, 28, 28])}
N input features 784 N output 10
LeNet5

model = LeNet5((1, 6, 16, 400, 120, 84, D_out)).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
criterion = nn.NLLLoss()
Epoch 0/99
Epoch 20/99
Epoch 40/99
Epoch 60/99
Epoch 80/99
MiniVGGNet
Epoch 0/99
Epoch 20/99
Epoch 40/99
Epoch 60/99
Epoch 80/99
import numpy as np
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
num_epochs = 5
learning_rate = 0.001

# CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='data/',
                                             train=True,
                                             transform=transform,
                                             download=True)

val_dataset = torchvision.datasets.CIFAR10(root='data/',
                                           train=False,
                                           transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=100,
                                           shuffle=True)

val_loader = torch.utils.data.DataLoader(dataset=val_dataset,
                                         batch_size=100,
                                         shuffle=False)
LeNet

model = LeNet5((3, 6, 16, 1, 120, 84, D_out), debug=True)
batch_idx, (data_example, target_example) = next(enumerate(train_loader))
print(model)
_ = model(data_example)
LeNet5(
(conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=1, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
MiniVGGNet
MiniVGGNet(
(conv11): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1))
(conv12): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1))
(conv21): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
(conv22): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=1, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
### DEBUG: Shape of last convnet= torch.Size([32, 6, 6]) . FC size= 1152
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Adam
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
ResNet

model = ResNet(ResidualBlock, [2, 2, 2], num_classes=D_out).to(device)  # 195738 parameters
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.NLLLoss()
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Sources:
• cs231n @ Stanford
• Sasank Chilamkurthy

Quote cs231n @ Stanford:
In practice, very few people train an entire Convolutional Network from scratch (with
random initialization), because it is relatively rare to have a dataset of sufficient size.
Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which
contains 1.2 million images with 1000 categories), and then use the ConvNet either as
an initialization or a fixed feature extractor for the task of interest.
These two major transfer learning scenarios look as follows:
• ConvNet as fixed feature extractor:
– Take a ConvNet pretrained on ImageNet,
– Remove the last fully-connected layer (this layer’s outputs are the 1000 class
scores for a different task like ImageNet)
– Treat the rest of the ConvNet as a fixed feature extractor for the new dataset.

In practice:

– Freeze the weights for all of the network except that of the final fully connected
layer. This last fully connected layer is replaced with a new one with random weights and
only this layer is trained.

• Finetuning the convnet:

Fine-tune the weights of the pretrained network by continuing the backpropagation. It is
possible to fine-tune all the layers of the ConvNet.

Instead of random initialization, we initialize the network with a pretrained network,
like the one that is trained on the imagenet 1000 dataset. The rest of the training looks
as usual.
%matplotlib inline

import os
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
import torchvision.transforms as transforms
from torchvision import models
#
from pathlib import Path
import matplotlib.pyplot as plt

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

1. Training function

Combine train and test/validation into a single function.

Now, let's write a general function to train a model. Here, we will illustrate:

• Scheduling the learning rate
• Saving the best model

def train_val_model(model, criterion, optimizer, dataloaders, num_epochs=25,
since = time.time()

best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0

running_loss = 0.0
running_corrects = 0

# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
    outputs = model(inputs)
    _, preds = torch.max(outputs, 1)
    loss = criterion(outputs, labels)

# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)

#nsamples = dataloaders[phase].dataset.data.shape[0]
epoch_loss = running_loss / nsamples
epoch_acc = running_corrects.double() / nsamples

losses[phase].append(epoch_loss)
accuracies[phase].append(epoch_acc)

if log_interval is not None and epoch % log_interval == 0:
    print('{} Loss: {:.4f} Acc: {:.4f}'.format(
        phase, epoch_loss, epoch_acc))
# CIFAR-10 dataset
train_dataset = torchvision.datasets.CIFAR10(root='data/',
                                             train=True,
                                             transform=transform,
                                             download=True)

test_dataset = torchvision.datasets.CIFAR10(root='data/',
                                            train=False,
                                            transform=transforms.ToTensor())

val_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                         batch_size=100,
                                         shuffle=False)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

train Loss: 1.2478 Acc: 0.5603
val Loss: 0.9084 Acc: 0.6866

epochs = np.arange(len(losses['train']))
_ = plt.plot(epochs, losses['train'], '-b', epochs, losses['val'], '--r')
Epoch 10/24
Epoch 15/24
Epoch 20/24
Adam optimizer

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 10.
model_ft.fc = nn.Linear(num_ftrs, D_out)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
epochs =np.arange(len(losses['train']))
_ =plt.plot(epochs, losses['train'], '-b', epochs, losses['val'], '--r')
Epoch 0/24
Epoch 5/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Freeze all the network except the final layer: set requires_grad = False to freeze the
parameters so that the gradients are not computed in backward().

model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer; parameters of newly constructed
# modules have requires_grad=True by default
model_conv.fc = nn.Linear(model_conv.fc.in_features, D_out)

model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
epochs =np.arange(len(losses['train']))
_ =plt.plot(epochs, losses['train'], '-b', epochs, losses['val'], '--r')
Epoch 0/24
Epoch 10/24
Epoch 15/24
Epoch 20/24
Adam optimizer

model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False

model_conv = model_conv.to(device)
epochs =np.arange(len(losses['train']))
_ =plt.plot(epochs, losses['train'], '-b', epochs, losses['val'], '--r')
<ipython-input-16-dde92868b554> in <module>
19
20 model, losses, accuracies =
train_val_model(model_conv, criterion,
optimizer_conv,
---> 21 exp_lr_scheduler, dataloaders, num_epochs=25, log_interval=5)
22
23 epochs = np.arange(len(losses['train']))