Eigenfaces Vs Fisher Faces Presentation
by Marian Moise
Overview
● Eigenface is still used today; its main idea is that each face image can be
reconstructed from a weighted sum of the principal components (the eigenfaces)
of the original training set of face images.
Recognition is performed by projecting a new image into the subspace
spanned by the eigenfaces (“face space”) and then classifying the face by
comparing its position in face space with the positions of known individuals.
● On the other hand, Fisherface is based on the LDA technique, which searches
for those vectors in the underlying space that best discriminate among classes
(rather than those that best describe the data).
More formally, given a number of independent features in terms of which the
data is described, LDA creates a linear combination of these that yields the
largest mean differences between the desired classes, as the sketch below
illustrates.
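To make the contrast concrete, here is a minimal sketch using scikit-learn's
PCA and LinearDiscriminantAnalysis; the arrays X and y are stand-ins for
flattened face images and identity labels, not data from this presentation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))    # stand-in for 40 flattened face images
y = np.repeat(np.arange(4), 10)   # 4 identities, 10 images each

# PCA picks the directions that best *describe* the data (maximum variance).
X_pca = PCA(n_components=3).fit_transform(X)

# LDA picks the directions that best *discriminate* the classes
# (at most c - 1 = 3 components for c = 4 classes).
X_lda = LinearDiscriminantAnalysis(n_components=3).fit(X, y).transform(X)
```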
PCA vs FLD
$$\Phi_i = \sum_{j=1}^{k} w_j u_j$$
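As a NumPy sketch of this formula (the names weights, eigenfaces, and
mean_face are placeholders, not from the slides), a face is rebuilt as the
mean face plus the weighted sum of the top-k eigenfaces:

```python
import numpy as np

def reconstruct(weights, eigenfaces, mean_face):
    """weights: (k,) coefficients w_j; eigenfaces: (k, n_pixels) rows u_j;
    mean_face: (n_pixels,) average face Psi."""
    # Phi = sum_j w_j * u_j; adding Psi back gives the reconstructed image
    return mean_face + weights @ eigenfaces
```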
Recognition overview
● Initialization: Acquire the training set of face images and calculate the
eigenfaces, which define the face space.
Each face image is an N×N array of pixel values, stacked row by row into an
N²-dimensional column vector:

$$I_i = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix} \;\longrightarrow\; \Gamma_i = \begin{bmatrix} a_{11} \\ \vdots \\ a_{1N} \\ a_{21} \\ \vdots \\ a_{2N} \\ \vdots \\ a_{NN} \end{bmatrix}$$
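A minimal NumPy sketch of this flattening step, assuming `images` is a list
of equally sized N×N grayscale arrays (a hypothetical variable):

```python
import numpy as np

def to_vectors(images):
    # Stack each N x N image row by row into an N*N-dimensional vector;
    # the result has one training face Gamma_i per row.
    return np.stack([img.reshape(-1) for img in images])
```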
Eigenfaces algorithm
Compute the average face vector Ψ:

$$\Psi = \frac{1}{M} \sum_{i=1}^{M} \Gamma_i$$

Subtract the mean face from each training image:

$$\Phi_i = \Gamma_i - \Psi$$
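In NumPy these two steps might look as follows, assuming Gamma holds one
flattened training face per row (a hypothetical name):

```python
import numpy as np

def center_faces(Gamma):
    Psi = Gamma.mean(axis=0)   # average face: (1/M) * sum of the Gamma_i
    Phi = Gamma - Psi          # mean-subtracted faces: Phi_i = Gamma_i - Psi
    return Phi, Psi
```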
Eigenfaces algorithm
Project each normalized face Φ_i onto the top K eigenfaces and represent it
by its weight vector:

$$\Omega_i = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_K \end{bmatrix}$$
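A sketch of how the eigenfaces and these weight vectors could be computed,
using the standard trick of diagonalizing the small M×M matrix Φ Φᵀ instead
of the huge pixel-space covariance (function and variable names are
illustrative):

```python
import numpy as np

def eigenfaces_and_weights(Phi, K):
    """Phi: (M, n_pixels) mean-subtracted training faces; K: eigenfaces kept."""
    L = Phi @ Phi.T                        # small (M, M) surrogate matrix
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:K]       # indices of the K largest
    U = Phi.T @ vecs[:, top]               # map back to pixel space
    U /= np.linalg.norm(U, axis=0)         # normalize each eigenface u_j
    Omega = Phi @ U                        # row i is Omega_i = [w_1 ... w_K]
    return U, Omega
```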
Recognition process
Let's say we have a face image Γ that is to be recognized. The following
steps should be performed:
1) Face normalization: Φ=Γ-Ψ
2) Project this normalized probe onto the
eigenspace and find the weights:

$$w_j = u_j^T \Phi$$
3) Classify by finding the training face whose weight vector is closest to
the probe's:

$$\varepsilon_r = \min_i \|\Omega - \Omega_i\|$$

Euclidean distance:

$$d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}$$
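Putting the three steps together, a minimal recognition sketch (reusing the
shapes from the earlier snippets; all names are placeholders):

```python
import numpy as np

def recognize(Gamma, Psi, U, Omega_train):
    Phi = Gamma - Psi                  # 1) face normalization
    Omega = U.T @ Phi                  # 2) projection: w_j = u_j^T Phi
    dists = np.linalg.norm(Omega_train - Omega, axis=1)   # Euclidean
    best = int(np.argmin(dists))       # 3) closest known face
    return best, dists[best]           # training index and eps_r
```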
Mahalanobis distance:

$$d(p, q) = \sqrt{(p - q)^T \Sigma^{-1} (p - q)}$$

where Σ is the covariance matrix of the data.
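In the eigenface basis the covariance of the weights is diagonal with the
eigenvalues λ_j on the diagonal, so a sketch of the Mahalanobis distance
between two weight vectors reduces to (assuming lam holds those eigenvalues):

```python
import numpy as np

def mahalanobis(w_p, w_q, lam):
    # (p - q)^T Sigma^{-1} (p - q) with a diagonal Sigma = diag(lam)
    return np.sqrt(np.sum((w_p - w_q) ** 2 / lam))
```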
$$W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} = [\,w_1\ w_2\ \cdots\ w_m\,]$$
Fisherfaces
Between-class scatter matrix is defined as:
$$S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$$
Within-class scatter matrix is defined as:
$$S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$$
where $\mu_i$ is the mean image of class $X_i$, $\mu$ is the mean of all
samples, and $N_i$ is the number of samples in class $X_i$.
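The two scatter matrices and the resulting W_opt can be sketched in a few
lines of NumPy/SciPy; this assumes S_W is nonsingular, which the next slides
show is not the case for raw face images:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_directions(X, y, m):
    """X: (N, n_features) samples; y: (N,) class labels; m: directions kept."""
    mu = X.mean(axis=0)
    n = X.shape[1]
    S_B = np.zeros((n, n))
    S_W = np.zeros((n, n))
    for c in np.unique(y):
        Xc = X[y == c]                       # samples of class c
        mu_c = Xc.mean(axis=0)
        d = (mu_c - mu)[:, None]
        S_B += len(Xc) * (d @ d.T)           # between-class scatter
        S_W += (Xc - mu_c).T @ (Xc - mu_c)   # within-class scatter
    # generalized eigenproblem S_B w = lambda S_W w (S_W must be invertible)
    vals, vecs = eigh(S_B, S_W)
    return vecs[:, np.argsort(vals)[::-1][:m]]   # columns of W_opt
```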
Singularity issue
In the face recognition problem, one is confronted with the difficulty that the
within-class scatter matrix SW is always singular. This stems from the fact that
the rank of SW is at most N-c and, in general, the number of images in the
learning set N is much smaller than the number of pixels in each image n.
The standard workaround is to first reduce the dimension with PCA and then
apply FLD in the reduced space, combining the two projections:

$$W_{opt}^T = W_{fld}^T W_{pca}^T$$
Singularity issue
$$W_{pca} = \arg\max_W |W^T S_T W|$$

$$W_{fld} = \arg\max_W \frac{|W^T W_{pca}^T S_B W_{pca} W|}{|W^T W_{pca}^T S_W W_{pca} W|}$$
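Combining the two stages gives a sketch of the full Fisherfaces projection;
it reuses the hypothetical fisher_directions helper from the earlier snippet:

```python
import numpy as np

def fisherfaces(X, y, c):
    """X: (N, n_pixels) flattened training faces; y: labels; c: class count."""
    Psi = X.mean(axis=0)
    Phi = X - Psi
    # W_pca: top N - c principal components, so that the projected S_W
    # has full rank and the FLD criterion is well defined
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    N = X.shape[0]
    W_pca = Vt[:N - c].T                       # (n_pixels, N - c)
    W_fld = fisher_directions(Phi @ W_pca, y, c - 1)
    return W_pca @ W_fld                       # W_opt = W_pca * W_fld
```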
Results
Related work
➔ PCA can outperform LDA when the training dataset is small or when only a
few eigenvectors are kept.
➔ Both methods perform well when a test image is similar to some image in
the training set.
Questions?
THANK YOU!